diff --git a/packages/web/docs/src/content/_meta.ts b/packages/web/docs/src/content/_meta.ts index bbee174967..57681f539e 100644 --- a/packages/web/docs/src/content/_meta.ts +++ b/packages/web/docs/src/content/_meta.ts @@ -5,6 +5,7 @@ export default { 'high-availability-cdn': 'High-Availability CDN', dashboard: 'Dashboard', gateway: 'Gateway', + logger: 'Logger', management: 'Management', 'other-integrations': 'Other Integrations', 'api-reference': 'CLI/API Reference', diff --git a/packages/web/docs/src/content/api-reference/gateway-cli.mdx b/packages/web/docs/src/content/api-reference/gateway-cli.mdx index 556312577f..caa79e73fe 100644 --- a/packages/web/docs/src/content/api-reference/gateway-cli.mdx +++ b/packages/web/docs/src/content/api-reference/gateway-cli.mdx @@ -19,51 +19,105 @@ hive-gateway --help which will print out the following: -{/* IMPORTANT: please dont forget to update the following when arguments change. simply run `node --import tsx packages/hive-gateway/src/bin.ts --help` and copy over the text */} +{/* IMPORTANT: please dont forget to update the following when arguments change. simply run `node --import tsx packages/gateway/src/bin.ts --help` and copy over the text */} ``` Usage: hive-gateway [options] [command] -Federated GraphQL Gateway +Unify and accelerate your data graph across diverse services with Hive Gateway, which seamlessly +integrates with Apollo Federation. Options: - --fork count of workers to spawn. uses "24" (available parallelism) workers when NODE_ENV is "production", - otherwise "1" (the main) worker (default: 1) (env: FORK) - -c, --config-path path to the configuration file. defaults to the following files respectively in the current working - directory: gateway.ts, gateway.mts, gateway.cts, gateway.js, gateway.mjs, gateway.cjs (env: - CONFIG_PATH) + --fork number of workers to spawn. (default: 1) (env: + FORK) + -c, --config-path path to the configuration file. defaults to + the following files respectively in the + current working directory: gateway.ts, + gateway.mts, gateway.cts, gateway.js, + gateway.mjs, gateway.cjs (env: CONFIG_PATH) -h, --host host to use for serving (default: 0.0.0.0) - -p, --port port to use for serving (default: 4000) (env: PORT) - --polling schema polling interval in human readable duration (default: 10s) (env: POLLING) + -p, --port port to use for serving (default: 4000) (env: + PORT) + --polling schema polling interval in human readable + duration (default: 10s) (env: POLLING) --no-masked-errors don't mask unexpected errors in responses - --masked-errors mask unexpected errors in responses (default: true) - --hive-usage-target Hive registry target to which the usage data should be reported to. requires the - "--hive-usage-access-token " option (env: HIVE_USAGE_TARGET) - --hive-usage-access-token Hive registry access token for usage metrics reporting. requires the "--hive-usage-target " - option (env: HIVE_USAGE_ACCESS_TOKEN) - --hive-persisted-documents-endpoint [EXPERIMENTAL] Hive CDN endpoint for fetching the persisted documents. requires the - "--hive-persisted-documents-token " option - --hive-persisted-documents-token [EXPERIMENTAL] Hive persisted documents CDN endpoint token. requires the - "--hive-persisted-documents-endpoint " option - --hive-cdn-endpoint Hive CDN endpoint for fetching the schema (env: HIVE_CDN_ENDPOINT) - --hive-cdn-key Hive CDN API key for fetching the schema. 
implies that the "schemaPathOrUrl" argument is a url (env: - HIVE_CDN_KEY) - --apollo-graph-ref Apollo graph ref of the managed federation graph (@) (env: APOLLO_GRAPH_REF) - --apollo-key Apollo API key to use to authenticate with the managed federation up link (env: APOLLO_KEY) + --masked-errors mask unexpected errors in responses (default: + true) + --opentelemetry [exporter-endpoint] Enable OpenTelemetry integration with an + exporter using this option's value as + endpoint. By default, it uses OTLP HTTP, use + "--opentelemetry-exporter-type" to change the + default. (env: OPENTELEMETRY) + --opentelemetry-exporter-type OpenTelemetry exporter type to use when + setting up OpenTelemetry integration. Requires + "--opentelemetry" to set the endpoint. + (choices: "otlp-http", "otlp-grpc", default: + "otlp-http", env: OPENTELEMETRY_EXPORTER_TYPE) + --hive-registry-token [DEPRECATED] please use "--hive-target" and + "--hive-access-token" (env: + HIVE_REGISTRY_TOKEN) + --hive-usage-target [DEPRECATED] please use --hive-target instead. + (env: HIVE_USAGE_TARGET) + --hive-target Hive registry target to which the usage and + tracing data should be reported to. Requires + either "--hive-access-token ", + "--hive-usage-access-token " or + "--hive-trace-access-token" option (env: + HIVE_TARGET) + --hive-access-token Hive registry access token for usage metrics + reporting and tracing. Enables both usage + reporting and tracing. Requires the + "--hive-target " option (env: + HIVE_ACCESS_TOKEN) + --hive-usage-access-token Hive registry access token for usage + reporting. Enables Hive usage report. Requires + the "--hive-target " option. It can't + be used together with "--hive-access-token" + (env: HIVE_USAGE_ACCESS_TOKEN) + --hive-trace-access-token Hive registry access token for tracing. + Enables Hive tracing. Requires the + "--hive-target " option. It can't be + used together with "--hive-access-token" (env: + HIVE_TRACE_ACCESS_TOKEN) + --hive-trace-endpoint Hive registry tracing endpoint. (default: + "https://api.graphql-hive.com/otel/v1/traces", + env: HIVE_TRACE_ENDPOINT) + --hive-persisted-documents-endpoint [EXPERIMENTAL] Hive CDN endpoint for fetching + the persisted documents. Requires the + "--hive-persisted-documents-token " + option + --hive-persisted-documents-token [EXPERIMENTAL] Hive persisted documents CDN + endpoint token. Requires the + "--hive-persisted-documents-endpoint + " option + --hive-cdn-endpoint Hive CDN endpoint for fetching the schema + (env: HIVE_CDN_ENDPOINT) + --hive-cdn-key Hive CDN API key for fetching the schema. 
+                                                    implies that the "schemaPathOrUrl" argument is
+                                                    a url (env: HIVE_CDN_KEY)
+  --apollo-graph-ref                                Apollo graph ref of the managed federation
+                                                    graph (@) (env:
+                                                    APOLLO_GRAPH_REF)
+  --apollo-key                                      Apollo API key to use to authenticate with the
+                                                    managed federation up link (env: APOLLO_KEY)
   --disable-websockets                              Disable WebSockets support
-  --jit                                             Enable Just-In-Time compilation of GraphQL documents (env: JIT)
+  --jit                                             Enable Just-In-Time compilation of GraphQL
+                                                    documents (env: JIT)
   -V, --version                                     output the version number
   --help                                            display help for command

 Commands:
-  supergraph [options] [schemaPathOrUrl]  serve a Federation supergraph provided by a compliant composition tool such as Mesh Compose or Apollo
-                                          Rover
-  subgraph [schemaPathOrUrl]              serve a Federation subgraph that can be used with any Federation compatible router like Apollo
-                                          Router/Gateway
-  proxy [options] [endpoint]              serve a proxy to a GraphQL API and add additional features such as monitoring/tracing, caching, rate
-                                          limiting, security, and more
+  supergraph [options] [schemaPathOrUrl]            serve a Federation supergraph provided by a
+                                                    compliant composition tool such as Mesh
+                                                    Compose or Apollo Rover
+  subgraph [schemaPathOrUrl]                        serve a Federation subgraph that can be used
+                                                    with any Federation compatible router like
+                                                    Apollo Router/Gateway
+  proxy [options] [endpoint]                        serve a proxy to a GraphQL API and add
+                                                    additional features such as
+                                                    monitoring/tracing, caching, rate limiting,
+                                                    security, and more
   help [command]                                    display help for command
-
 ```

 All arguments can also be configured in the config file.

@@ -79,7 +133,12 @@ configuration file if you provide these environment variables.

 - `HIVE_CDN_ENDPOINT`: The endpoint of the Hive Registry CDN
 - `HIVE_CDN_KEY`: The API key provided by Hive Registry to fetch the schema
-- `HIVE_REGISTRY_TOKEN`: The token to push the metrics to Hive Registry
+- `HIVE_TARGET`: The target for usage reporting and observability in Hive Console
+- `HIVE_USAGE_TARGET` (deprecated, use `HIVE_TARGET`): The target for usage reporting and
+  observability in Hive Console
+- `HIVE_ACCESS_TOKEN`: The access token used for usage reporting and observability in Hive Console
+- `HIVE_USAGE_ACCESS_TOKEN`: The access token used for usage reporting only in Hive Console
+- `HIVE_TRACE_ACCESS_TOKEN`: The access token used for observability only in Hive Console

 [Learn more about Hive Registry integration here](/docs/gateway/supergraph-proxy-source)

diff --git a/packages/web/docs/src/content/api-reference/gateway-config.mdx b/packages/web/docs/src/content/api-reference/gateway-config.mdx
index 9863a46d52..2f47d5fa43 100644
--- a/packages/web/docs/src/content/api-reference/gateway-config.mdx
+++ b/packages/web/docs/src/content/api-reference/gateway-config.mdx
@@ -451,6 +451,54 @@ different phases of the GraphQL execution to manipulate or track the entire work

 [See dedicated plugins feature page for more information](/docs/gateway/other-features/custom-plugins)

+### `openTelemetry`
+
+This option allows you to enable the OpenTelemetry integration and customize its behavior.
+
+[See dedicated Monitoring/Tracing feature page for more information](/docs/gateway/monitoring-tracing)
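+
+As an illustration, here is a minimal configuration sketch combining the options documented below.
+The span names under `spans` follow the
+[Reported Spans](/docs/gateway/monitoring-tracing#reported-spans) list; `http` and `graphqlParse`
+are used purely as illustrative examples:
+
+```ts filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway'
+
+export const gatewayConfig = defineConfig({
+  openTelemetry: {
+    // keep track of the current span with the standard context manager
+    useContextManager: true,
+    traces: {
+      spans: {
+        // report the HTTP request/response span
+        http: true,
+        // skip the GraphQL parse span
+        graphqlParse: false
+      }
+    }
+  }
+})
+```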
+
+#### `useContextManager`
+
+Use the standard `@opentelemetry/api` Context Manager to keep track of the current span. This is an
+advanced option that should be used carefully, as it can break your custom plugin spans.
+
+#### `inheritContext`
+
+If true (the default), the HTTP span will be created with the active span as parent. If false, the
+HTTP span will always be a root span, which will create its own trace for each request.
+
+#### `propagateContext`
+
+If true (the default), uses the registered propagators to propagate the active context to upstream
+services.
+
+#### `configureDiagLogger`
+
+If true (the default), sets up the standard `@opentelemetry/api` diag API to use the Hive Gateway
+logger. A child logger is created with the prefix `[opentelemetry][diag] `.
+
+#### `flushOnDispose`
+
+If truthy (the default), the registered span processor will be forcefully flushed when the Hive
+Gateway is about to shut down. To flush, the `forceFlush` method is called (if it exists), but you
+can change the method to call by providing a string as a value to this option.
+
+#### `traces`
+
+Pass `true` to enable tracing integration with all spans available.
+
+This option can also be an object for more fine-grained configuration.
+
+##### `tracer`
+
+The `Tracer` instance to be used. The default is a tracer with the name `gateway`.
+
+##### `spans`
+
+An object where each key is a span name and the value is either a boolean or a filtering function
+that controls whether the span should be reported.
+[See Reported Spans and Events for details](/docs/gateway/monitoring-tracing#reported-spans).
+
 ### `cors`

 [See dedicated CORS feature page for more information](/docs/gateway/other-features/security/cors)

diff --git a/packages/web/docs/src/content/gateway/deployment/node-frameworks/fastify.mdx b/packages/web/docs/src/content/gateway/deployment/node-frameworks/fastify.mdx
index 0ad615a1f1..29bc70e214 100644
--- a/packages/web/docs/src/content/gateway/deployment/node-frameworks/fastify.mdx
+++ b/packages/web/docs/src/content/gateway/deployment/node-frameworks/fastify.mdx
@@ -18,17 +18,14 @@ So you can benefit from the powerful plugins of Fastify ecosystem with Hive Gate

 ## Example

-In order to connect Fastify's logger to the gateway, you need to install the
-`@graphql-hive/logger-pino` package together with `@graphql-hive/gateway-runtime` and `fastify`.
-
 ```sh npm2yarn
-npm i @graphql-hive/gateway-runtime @graphql-hive/logger-pino fastify
+npm i @graphql-hive/gateway-runtime @graphql-hive/logger fastify
 ```

 ```ts
 import fastify, { type FastifyReply, type FastifyRequest } from 'fastify'
-import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
-import { createLoggerFromPino } from '@graphql-hive/logger-pino'
+import { createGatewayRuntime, Logger } from '@graphql-hive/gateway-runtime'
+import { PinoLogWriter } from '@graphql-hive/logger/writers/pino'

 // Request ID header used for tracking requests
 const requestIdHeader = 'x-request-id'
@@ -52,8 +49,10 @@ interface FastifyContext {
 }

 const gateway = createGatewayRuntime({
-  // Integrate Fastify's logger / Pino with the gateway logger
-  logging: createLoggerFromPino(app.log),
+  // Use Fastify's logger (Pino) with Hive Logger
+  logging: new Logger({
+    writers: [new PinoLogWriter(app.log)]
+  }),
   // Align with Fastify
   requestId: {
     // Use the same header name as Fastify

diff --git a/packages/web/docs/src/content/gateway/deployment/serverless/index.mdx b/packages/web/docs/src/content/gateway/deployment/serverless/index.mdx
index 556af96795..6f0c224b79 100644
--- a/packages/web/docs/src/content/gateway/deployment/serverless/index.mdx
+++ b/packages/web/docs/src/content/gateway/deployment/serverless/index.mdx
@@ -105,7 +105,7 @@ You can then generate the supergraph file using the `mesh-compose` CLI from

 npx mesh-compose supergraph
 ```

-#### Compose supegraph with Apollo Rover
+#### Compose supergraph with Apollo Rover

 Apollo Rover only allows exporting the supergraph as a GraphQL document, so we will have to wrap
 this output into a JavaScript file:

diff --git a/packages/web/docs/src/content/gateway/logging-and-error-handling.mdx b/packages/web/docs/src/content/gateway/logging-and-error-handling.mdx
index 76723262c5..6e685a1d5d 100644
--- a/packages/web/docs/src/content/gateway/logging-and-error-handling.mdx
+++ b/packages/web/docs/src/content/gateway/logging-and-error-handling.mdx
@@ -5,185 +5,399 @@ description: how to handle errors and mask them to prevent leaking sensitive information to the client. ---

-# Logging & Error Handling

+import { Callout, Tabs } from '@theguild/components'

-import { Callout } from '@theguild/components'

+## Logging

-Hive Gateway provides a built-in logger that allows you to log information about the Gateway's
-lifecycle, errors, and other events. The default logger uses JavaScript's
+Hive Gateway uses the [Hive Logger](/docs/logger) for logging information about the Gateway's
+lifecycle, errors, and other events. The default logger uses JavaScript's
 [`console`](https://developer.mozilla.org/en-US/docs/Web/API/console) API, but you can also provide
 a custom logger implementation.

 By default, Hive Gateway logs the critical masked errors so that the sensitive information is not
 exposed to the client.

+
+ The Hive Logger is a powerful tool with many features. You can learn more about it in the [Hive
+ Logger documentation](/docs/logger).
+

-Hive Gateway provides a built-in logging system that allows you to log information about the
-Gateway's lifecycle, errors, and other events. The default logger uses JavaScript's
-[`console`](https://developer.mozilla.org/en-US/docs/Web/API/console) API, but you can also provide
-a custom logger implementation.

+### Using the Logger
+
+The `log` prop is now used in all APIs, contexts, and plugin options. It's short and intuitive,
+making it easier to understand and use.
+
+#### Context
+
+The context object passed to plugins and hooks will always have the relevant logger instance
+provided through the `log` property. The same goes for all of the transports' contexts: each of
+them now has a `log` prop.
+
+##### Plugin Setup Function
+
+The `log` property in the plugin setup function contains the root-most logger instance.
+
+```ts filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway'
+import { myPlugins } from './my-plugins'
+
+export const gatewayConfig = defineConfig({
+  plugins(ctx) {
+    ctx.log.info('Loading plugins...')
+    return [...myPlugins]
+  }
+})
+```
+
+##### Plugin Hooks

-### Logging in JSON format

+Across all plugins, hooks and contexts, the `log` property will always be provided.

-By default without any production environment variable, Hive Gateway prints the logs in human
-readable format. However, in production (when `NODE_ENV` is `production`) Hive Gateway prints the
-logs in JSON format, but if you want to enable it in regular mode, you can pass `LOG_FORMAT=json` as
-an environment variable.

+It is now highly recommended to use the logger from the context at all times because it contains
+the necessary metadata for increased observability, like the request ID or the execution step.
+
+```diff filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway';
+
+export const gatewayConfig = defineConfig({
+-  plugins({ log }) {
++  plugins() {
+    return [
+      {
+        onExecute({ context }) {
+-          log.info('Executing...');
++          context.log.info('Executing...');
+        },
+        onDelegationPlan(context) {
+-          log.info('Creating delegation plan...');
++          context.log.info('Creating delegation plan...');
+        },
+        onSubgraphExecute(context) {
+-          log.info('Executing on subgraph...');
++          context.log.info('Executing on subgraph...');
+        },
+        onFetch({ context }) {
+-          log.info('Fetching data...');
++          context.log.info('Fetching data...');
+        },
+      },
+    ];
+  },
+});
+```
+
+This will log with the necessary metadata for increased observability, like this:
+
+```
+2025-04-10T14:00:00.000Z INF Executing...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+2025-04-10T14:00:00.000Z INF Creating delegation plan...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+  subgraph: "accounts"
+2025-04-10T14:00:00.000Z INF Executing on subgraph...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+  subgraph: "accounts"
+2025-04-10T14:00:00.000Z INF Fetching data...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+```

 ### Log Levels

-Hive Gateway uses 4 log levels `debug`, `info`, `warn` and `error`. By default, Hive Gateway will
-only log info, warn and error messages.

+The default logger uses the `info` log level, which makes sure to log only `info` and more severe
+logs. Available log levels are:
+
+- `false` (disables logging altogether)
+- `trace`
+- `debug`
+- `info` _default_
+- `warn`
+- `error`
+
+##### Change on Start
+
+The `logging` option during Hive Gateway setup accepts:
+
+1. `true` to enable and log using the `info` level
+1. `false` to disable logging altogether
+1. A Hive Logger instance
+1. 
A string log level (e.g., `debug`, `info`, `warn`, `error`)
+
+
+
+```ts filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway'
+
+export const gatewayConfig = defineConfig({
+  logging: 'debug'
+})
+```

-#### `error`

+

-- Only log unexpected errors including masked errors

+

-#### `warn`

+```ts filename="index.ts"
+import { createGatewayRuntime } from '@graphql-hive/runtime-gateway'

-- All prior log levels
-- Deprecation notices
-- Potential issues that could lead to errors

+export const gateway = createGatewayRuntime({
+  logging: 'debug'
+})
+```
+
+

-#### `info`

+

-- All prior log levels
-- Information about the current state of the system

+##### Change Dynamically

-#### `debug`

+A powerful ability of Hive Logger is allowing you to change the log level dynamically at runtime.
+This is useful for debugging and testing purposes. You can change the log level by calling the
+`setLevel` method on the logger instance.

-- All prior log levels
-- Processing of GraphQL parameters
-- Parsing of GraphQL parameters
-- Execution or subscription start
-- Received GraphQL operation variables
-- Execution or subscription end
-- Health checks
-- Subgraph requests
-- All HTTP requests and responses
-- Supergraph fetching
-- Any caching operations

+Let's write a plugin that toggles the `debug` log level when a secure HTTP request is made on the
+`/toggle-debug` path.

- If you want to learn more about the life-cycle of the Gateway, you can enable debug logs. Setting
- the `DEBUG=1` environment variable or passing `debug` to `logging` parameter will enable debug
- logs that include all operations done to the upstream services, query plans etc.

+
+Please be very careful about securing your logger. Changing the log level from an HTTP request can
+be a security risk and should be avoided in production environments. **Use this feature with caution
+and proper security measures.**
+

-### Integration with Winston (only Node.js)

+```ts filename="toggle-debug.ts"
+import { GatewayPlugin, Logger } from '@graphql-hive/gateway'
+
+interface ToggleDebugOptions {
+  /**
+   * A secret value that has to be provided alongside the
+   * request authenticating its origin.
+   */
+  secret: string
+  /**
+   * The root-most logger, all of the child loggers will
+   * inherit its log level.
+   */
+  rootLog: Logger
+}

-By default, Hive Gateway uses the built-in `console` logger. However, you can also integrate Hive
-Gateway with [Winston](https://github.com/winstonjs/winston) on Node.js environments.

+export function useToggleDebug(opts: ToggleDebugOptions): GatewayPlugin {
+  return {
+    onRequest({ request }) {
+      if (!request.url.endsWith('/toggle-debug')) {
+        return
+      }

-You need to install `winston` and `@graphql-hive/winston` packages to use Winston with Hive Gateway.
+ const secret = request.headers.get('x-toggle-debug-secret') + if (secret !== opts.secret) { + return + } -```sh npm2yarn -npm i winston @graphql-hive/winston + // request is authenticated, we can change the log level + if (opts.rootLog.level === 'debug') { + opts.rootLog.setLevel('info') + } else { + opts.rootLog.setLevel('debug') + } + + opts.rootLog.warn('Log level changed to %s', opts.rootLog.level) + } + } +} ``` -```ts -import { createLogger, format, transports } from 'winston' -import { defineConfig } from '@graphql-hive/gateway' -import { createLoggerFromWinston } from '@graphql-hive/winston' +And finally use the plugin with Hive Gateway: -// Create a Winston logger -const winstonLogger = createLogger({ - level: 'info', - format: format.combine(format.timestamp(), format.json()), - transports: [new transports.Console()] -}) +```ts filename="gateway.config.ts" +import { defineConfig } from '@graphql-hive/gateway' +import { useToggleDebug } from './toggle-debug' export const gatewayConfig = defineConfig({ - // Create an adapter for Winston - logging: createLoggerFromWinston(winstonLogger) + plugins(ctx) { + return [ + useToggleDebug({ + secret: 'wow-very-much-secret', + // the plugins factory function provides the root logger, + // all of the child loggers will inherit its log level + rootLog: ctx.log + }) + ] + } }) ``` -### Integration with Pino (only Node.js) +Finally, issue the following request to toggle the debug log level: + +```sh +curl -H 'x-toggle-debug-secret: wow-very-much-secret' \ + http://localhost:4000/toggle-debug +``` -Like Winston, you can also use [Pino](https://getpino.io/) with Hive Gateway on Node.js -environments. +### Writing Logs in JSON format -```ts -import pino from 'pino' -import { defineConfig } from '@graphql-hive/gateway' -import { createLoggerFromPino } from '@graphql-hive/logger-pino' +By default, Hive Gateway prints the logs in human readable format. However, in production +environments where you use tools for consuming the logs, it's advised to print logs in JSON format. + +#### Toggle with Environment Variable + +To enable JSON logs, pass the `LOG_JSON=1` as an environment variable to enable the +[JSON writer](/docs/logger#jsonlogwriter). + +#### Use the JSON Log Writer + +You can also use the JSON writer directly in your configuration. + +```ts filename="gateway.config.ts" +import { defineConfig, JSONLogWriter, Logger } from '@graphql-hive/gateway' export const gatewayConfig = defineConfig({ - logging: createLoggerFromPino(pino({ level: 'info' })) + logging: new Logger({ writers: [new JSONLogWriter()] }) }) ``` -### Custom Logger +#### Pretty Printing JSON + +When using the JSON writer (either by toggling it using the environment variable or using the JSON +writer directly), you can use the `LOG_JSON_PRETTY=1` environment variable to enable pretty-printing +the JSON logs. + +### Custom Logger Writers + +The new Hive Logger is designed to be extensible and allows you to create custom logger adapters by +implementing "log writers" instead of the complete logger interface. The `LogWriter` is simply: -If you want to implement your own logger, you can use the interface `Logger` from -`@graphql-hive/gateway`. 
The logger should implement the following methods:
+
+```ts
+import { Attributes, LogLevel } from '@graphql-hive/logger'
+
+interface LogWriter {
+  write(
+    level: LogLevel,
+    attrs: Attributes | null | undefined,
+    msg: string | null | undefined
+  ): void | Promise<void>
+  flush?(): void | Promise<void>
+}
+```

-- `log(...args: any[]): void`
-- `error(...args: any[]): void`
-- `warn(...args: any[]): void`
-- `info(...args: any[]): void`
-- `debug(lazyMessageArgs: ...(() => any | any)[]): void`
-- `child(nameOrMeta: string | Record): Logger`

+As you can see, it's very simple and allows you not only to use your favourite logger like Pino or
+Winston, but also to implement custom writers that send logs to an HTTP consumer or write to a file.

-Keep on mind that, all methods can receive `any` type of variables, and serializing them for the
-output is up to the logger implementation. `JSON.stringify` might not be the best option for all
-cases.

+
+ Read more about implementing your own writers in the [Hive Logger documentation](/docs/logger).
+

-Also please notice that `debug` can receive functions that will be invoked only if the log level is
-enabled.

+#### Daily File Log Writer (Node.js Only)

-Here is an example of a custom logger implementation, that logs to the console. But keep in mind
-that this is a very basic example and you shouldn't use it in production directly!

+Here is an example of a custom log writer that writes logs to a daily log file. It will write to a
+file for each day in a given directory.

-```ts
-import { Logger } from '@graphql-hive/gateway'
+```ts filename="daily-file-log-writer.ts"
+import fs from 'node:fs/promises'
+import path from 'node:path'
+import { Attributes, jsonStringify, LogLevel, LogWriter } from '@graphql-hive/logger'

-class CustomLogger implements Logger {
+export class DailyFileLogWriter implements LogWriter {
   constructor(
-    public name: string,
-    public meta: Record,
-    public isDebugEnabled: boolean
+    private dir: string,
+    private name: string
   ) {}
-
-  log(...args: any[]): void {
-    console.log(this.name, this.meta, ...args)
+  write(level: LogLevel, attrs: Attributes | null | undefined, msg: string | null | undefined) {
+    const date = new Date().toISOString().split('T')[0]
+    const logfile = path.resolve(this.dir, `${this.name}_${date}.log`)
+    return fs.appendFile(logfile, jsonStringify({ level, msg, attrs }))
   }
+}
+```

-  error(...args: any[]): void {
-    console.error(this.name, this.meta, ...args)
-  }

+And using it is as simple as plugging it into an instance of Hive Logger passed to the `logging`
+option:

-  warn(...args: any[]): void {
-    console.warn(this.name, this.meta, ...args)
-  }

+```ts filename="gateway.config.ts"
+import { defineConfig, JSONLogWriter, Logger } from '@graphql-hive/gateway'
+import { DailyFileLogWriter } from './daily-file-log-writer'

-  info(...args: any[]): void {
-    console.info(this.name, this.meta, ...args)
-  }

+export const gatewayConfig = defineConfig({
+  logging: new Logger({
+    // you can combine multiple writers to log to different places
+    writers: [
+      // this will log to the console in JSON format
+      new JSONLogWriter(),
+      // and this is our daily file writer
+      new DailyFileLogWriter('/var/log/hive', 'gateway')
+    ]
+  })
+})
+```

-  debug(...lazyMessageArgs: (() => any | any)[]): void {
-    if (this.isDebugEnabled) {
-      console.debug(
-        this.name,
-        this.meta,
-        ...lazyMessageArgs.map(arg => (typeof arg === 'function' ? 
arg() : arg)) - ) - } - } +#### Pino (Node.js Only) - child(nameOrMeta: string | Record): Logger { - let newName: string - let newMeta: Record - if (typeof nameOrMeta === 'string') { - newName = `${this.name}.${nameOrMeta}` - newMeta = this.meta - } else { - newName = this.name - newMeta = { ...this.meta, ...nameOrMeta } - } - return new CustomLogger(newName, newMeta) +Use the [Node.js `pino` logger library](https://github.com/pinojs/pino) for writing Hive Logger's +logs. + +`pino` is an optional peer dependency, so you must install it first. + +```sh npm2yarn +npm i pino pino-pretty +``` + +Since we're using a custom log writter, you have to install the Hive Logger package too: + +```sh npm2yarn +npm i @graphql-hive/logger +``` + +```ts filename="gateway.config.ts" +import pino from 'pino' +import { defineConfig } from '@graphql-hive/gateway' +import { Logger } from '@graphql-hive/logger' +import { PinoLogWriter } from '@graphql-hive/logger/writers/pino' + +const pinoLogger = pino({ + transport: { + target: 'pino-pretty' } -} +}) + +export const gatewayConfig = defineConfig({ + logging: new Logger({ + writers: [new PinoLogWriter(pinoLogger)] + }) +}) +``` + +#### Winston (Node.js Only) + +Use the [Node.js `winston` logger library](https://github.com/winstonjs/winston) for writing Hive +Logger's logs. + +`winston` is an optional peer dependency, so you must install it first. + +```sh +npm i winston +``` + +Since we're using a custom log writter, you have to install the Hive Logger package too: + +```sh npm2yarn +npm i @graphql-hive/logger +``` + +```ts filename="gateway.config.ts" +import { createLogger, format, transports } from 'winston' +import { defineConfig } from '@graphql-hive/gateway' +import { Logger } from '@graphql-hive/logger' +import { WinstonLogWriter } from '@graphql-hive/logger/writers/winston' + +const winstonLogger = createLogger({ + level: 'info', + format: format.combine(format.timestamp(), format.json()), + transports: [new transports.Console()] +}) + +export const gatewayConfig = defineConfig({ + logging: new Logger({ + writers: [new WinstonLogWriter(winstonLogger)] + }) +}) ``` ## Error Handling diff --git a/packages/web/docs/src/content/gateway/monitoring-tracing.mdx b/packages/web/docs/src/content/gateway/monitoring-tracing.mdx index 107b0777eb..c5fc1e652d 100644 --- a/packages/web/docs/src/content/gateway/monitoring-tracing.mdx +++ b/packages/web/docs/src/content/gateway/monitoring-tracing.mdx @@ -123,110 +123,141 @@ The following are available to use with this plugin: - Upstream HTTP calls: tracks the outgoing HTTP requests made by the GraphQL execution. - Context propagation: propagates the trace context between the incoming HTTP request and the outgoing HTTP requests. +- Custom Span and attributes: Add your own business spans and attributes from your own plugin. +- Logs and Traces correlation: Rely on stanadard OTEL shared context to correlate logs and traces ![image](https://github.com/user-attachments/assets/74918ade-8d7c-44ee-89b2-e10a13ffc4ad) -### Usage Example +### OpenTelemetry Setup - +For the OpenTelemetry tracing feature to work, OpenTelemetry JS API must be setup. - +We recommend to place your OpenTelemetry setup in a `telemetry.ts` file that will be your first +import in your `gateway.config.ts` file. This allow instrumentations to be registered (if any) +before any other packages are imported. 
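For instance, a minimal sketch of that import order (assuming a `telemetry.ts` file as described
below, and the `openTelemetry` option from the gateway configuration reference):

```ts filename="gateway.config.ts"
// Import the OpenTelemetry setup first, so instrumentations (if any) are
// registered before any other module is loaded.
import './telemetry'
import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  // enable the tracing integration with all spans available
  openTelemetry: { traces: true }
})
```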
-```ts filename="gateway.config.ts" -import { createStdoutExporter, defineConfig } from '@graphql-hive/gateway' +For ease of configuration, we provide a `openTelemetrySetup` function from +`@graphql-hive/plugin-opentelemetry/setup` module, with sensible default and straightforward API +compatible with all runtimes. -export const gatewayConfig = defineConfig({ - openTelemetry: { - exporters: [ - // A simple output to the console. - // You can add more exporters here, please see documentation below for more examples. - createStdoutExporter() - ], - serviceName: 'my-custom-service-name', // Optional, the name of your service - tracer: myCustomTracer, // Optional, a custom tracer to use - inheritContext: true, // Optional, whether to inherit the context from the incoming request - propagateContext: true, // Optional, whether to propagate the context to the outgoing requests - // Optional config to customize the spans. By default all spans are enabled. - spans: { - http: true, // Whether to track the HTTP request/response - graphqlParse: true, // Whether to track the GraphQL parse phase - graphqlValidate: true, // Whether to track the GraphQL validate phase - graphqlExecute: true, // Whether to track the GraphQL execute phase - subgraphExecute: true, // Whether to track the subgraph execution phase - upstreamFetch: true // Whether to track the upstream HTTP requests - } +But this utility is not mandatory, you can use any setup relevant to your specific use case and +infrastrcture. + +The most commonnly used otel packages are available when using Hive Gateway with CLI. Please switch +to programatic usage if you need more packages. + +Please refer to [`opentelemetry-js` documentation](https://opentelemetry.io/docs/languages/js/) for +more details about OpenTelemetry setup and API. + +#### Basic usage + +Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
, "CLI"]}>

This configuration API still relies on the official `@opentelemetry/api` package, which means you
can use any official or standard-compliant packages with it.

```ts filename="telemetry.ts"
import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'

openTelemetrySetup({
  // Mandatory: It depends on the available API in your runtime.
  // We recommend AsyncLocalStorage based manager when possible.
  // `@opentelemetry/context-zone` is also available for other runtimes.
  // Pass `false` to disable context manager usage.
  contextManager: new AsyncLocalStorageContextManager(),

  traces: {
    // Define your exporter, most of the time the OTLP HTTP one. Traces are batched by default.
    exporter: new OTLPTraceExporter({ url: process.env['OTLP_URL'] }),

    // You can easily enable a console exporter for quick debug
    console: process.env['DEBUG_TRACES'] === '1'
  }
})
```
-```sh npm2yarn -npm i @graphql-mesh/plugin-opentelemetry -``` + + Official OpenTelemetry Node SDK is only working when Hive Gateway is used via the CLI or + programatically with a Node runtime. + -```ts filename="index.ts" -import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' -import { createStdoutExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +OpenTelemetry provides an official SDK for Node (`@opentelemetry/sdk-node`). This SDK offers a +standard API compatible with OTEL SDK specification. -export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ - ...ctx, - exporters: [createStdoutExporter()], - serviceName: 'my-custom-service-name', // Optional, the name of your service - tracer: myCustomTracer, // Optional, a custom tracer to use - inheritContext: true, // Optional, whether to inherit the context from the incoming request - propagateContext: true, // Optional, whether to propagate the context to the outgoing requests - // Optional config to customize the spans. By default all spans are enabled. - spans: { - http: true, // Whether to track the HTTP request/response - graphqlParse: true, // Whether to track the GraphQL parse phase - graphqlValidate: true, // Whether to track the GraphQL validate phase - graphqlExecute: true, // Whether to track the GraphQL execute phase - subgraphExecute: true, // Whether to track the subgraph execution phase - upstreamFetch: true // Whether to track the upstream HTTP requests - } - }) - ] -}) +It ships with a lot of features, most of them being configurable via environment variables. + +The most commonnly used otel packages are available when using Hive Gateway with CLI, which means +you can follow official `@opentelemetry/sdk-node` documentation for your setup. Please switch to +programatic usage if you need more packages. + +```ts filename="telemetry.ts" +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' +import { NodeSDK, resources, tracing } from '@opentelemetry/sdk-node' + +new NodeSDK({ + // All configuration is optional. OTEL rely on env variables or sensible default value. + + // Defines the exporter, HTTP OTLP most of the time. Traces are batched by default + traceExporter: new OTLPTraceExporter({ url: process.env['OTLP_URL'] }), + + // Optional, enables automatic instrumentation, adding traces like network spans. + instrumentations: getNodeAutoInstrumentations(), + + // Optional, enables automatic ressource attributes detection + resourceDetectors: getResourceDetectors() +}).start() ``` -

If your use case is simple enough, you can use CLI options to set up OpenTelemetry.

```bash
hive-gateway supergraph supergraph.graphql \
  --opentelemetry "http://localhost:4318"
```

By default, an HTTP OTLP exporter will be used, but you can change it with
`--opentelemetry-exporter-type`:

```bash
hive-gateway supergraph supergraph.graphql \
  --opentelemetry "http://localhost:4317" \
  --opentelemetry-exporter-type otlp-grpc
```

Please refer to `openTelemetrySetup()` usage if you need more control and options.

#### Service name and version

You can provide a service name and version, either by using the standard `OTEL_SERVICE_NAME` and
`OTEL_SERVICE_VERSION` environment variables or by providing them programmatically via the setup
options.

Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}> -```ts filename="gateway.config.ts" -import { createStdoutExporter, defineConfig } from '@graphql-hive/gateway' -export const gatewayConfig = defineConfig({ - openTelemetry: { - exporters: [createStdoutExporter()] +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' + +openTelemetrySetup({ + resource: { + serviceName: 'my-service', + serviceVersion: '1.0.0' } }) ``` @@ -235,55 +266,42 @@ export const gatewayConfig = defineConfig({ - - Beware that OpenTelemetry JavaScript SDK writes spans using `console.dir`. Meaning, - serverless/on-the-edge environments that don't support `console.dir` (like [Cloudflare - Workers](https://developers.cloudflare.com/workers/runtime-apis/console/)) wont show any logs. - - -```ts filename="index.ts" -import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' -import { createStdoutExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +```ts filename="telemetry.ts" +import { NodeSDK, resources } from '@opentelemetry/sdk-node' -export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ - ...ctx, - exporters: [createStdoutExporter()] - }) - ] -}) +new NodeSDK({ + resource: resources.resourceFromAttributes({ + 'service.name': 'my-service', + 'service.version': '1.0.0' + }) +}).start() ```
-
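When running the gateway via the CLI, the same information can also be provided through these
environment variables, without any code changes (the supergraph source here is illustrative):

```sh
OTEL_SERVICE_NAME=my-service \
OTEL_SERVICE_VERSION=1.0.0 \
hive-gateway supergraph supergraph.graphql --opentelemetry "http://localhost:4318"
```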
+#### Custom resource attributes -{/* OTLP (HTTP) */} +Resource attributes can be defined by providing a `Resource` instance to the setup `resource` +option. - +Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>

This resource will be merged with the resource created from env variables, which means
`service.name` and `service.version` are not mandatory if already provided through environment
variables.

```ts filename="telemetry.ts"
import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
import { resourceFromAttributes } from '@opentelemetry/resources'

openTelemetrySetup({
  resource: resourceFromAttributes({
    'custom.attribute': 'my custom value'
  })
})
```

@@ -291,56 +309,78 @@ export const gatewayConfig = defineConfig({

-```ts filename="index.ts"
-import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
-import { createOtlpHttpExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry'
+```ts filename="telemetry.ts"
+import { NodeSDK, resources } from '@opentelemetry/sdk-node'

-export const gateway = createGatewayRuntime({
-  plugins: ctx => [
-    useOpenTelemetry({
-      ...ctx,
-      exporters: [
-        createOtlpHttpExporter({
-          url: 'http://:4318'
-          // ...
-          // additional options to pass to @opentelemetry/exporter-trace-otlp-http
-          // https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-http
-        })
-      ]
-    })
-  ]
-})
+new NodeSDK({
+  resource: resources.resourceFromAttributes({
+    'service.name': 'my-service',
+    'service.version': '1.0.0'
+  })
+}).start()
```

#### Trace Exporter, Span Processors and Tracer Provider

Exporters are responsible for storing the traces recorded by OpenTelemetry. There is a large
existing range of exporters, and Hive Gateway is compatible with any exporter using the standard
`@opentelemetry/api` OpenTelemetry implementation.

Span Processors are responsible for processing recorded spans before they are stored. They
generally take an exporter as a parameter, which is used to store the processed spans.

The Tracer Provider is responsible for creating the Tracers that will be used to record spans.

You can set up OpenTelemetry by providing either:

- a Trace Exporter. A Span Processor and a Tracer Provider will be created for you, with sensible
  production defaults like trace batching.
- a list of Span Processors. This gives you more control, and allows you to define more than one
  exporter. The Tracer Provider will be created for you.
- a Tracer Provider. This is the manual setup mode where nothing is created automatically. The
  Tracer Provider will just be registered.

Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>

```ts filename="telemetry.ts"
import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'

openTelemetrySetup({
  contextManager: new AsyncLocalStorageContextManager(),
  traces: {
    // Define your exporter, most of the time the OTLP HTTP one. Traces are batched by default.
    exporter: ...,

    // To ease debug, you can also add a non-batched console exporter easily with `console` option
    console: true,
  },
})

// or

openTelemetrySetup({
  contextManager: new AsyncLocalStorageContextManager(),
  traces: {
    // Define your span processors.
    processors: [...],
  },
})

// or

openTelemetrySetup({
  contextManager: new AsyncLocalStorageContextManager(),
  traces: {
    // Define your tracer provider.
    tracerProvider: ...,
  },
})
```

@@ -348,52 +388,64 @@

-```ts filename="index.ts"
-import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
-import { createOtlpGrpcExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry'
+```ts filename="telemetry.ts"
+import { NodeSDK } from '@opentelemetry/sdk-node'

-export const gateway = createGatewayRuntime({
-  plugins: ctx => [
-    useOpenTelemetry({
-      ...ctx,
-      exporters: [
-        createOtlpGrpcExporter({
-          url: 'http://:4317'
-          // ...
-          // additional options to pass to @opentelemetry/exporter-trace-otlp-grpc
-          // https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-grpc
-        })
-      ]
-    })
-  ]
-})
+new NodeSDK({
+  // Define your exporter, most of the time the OTLP HTTP one. Traces are batched by default.
+  traceExporter: ...,
+}).start()
+
+// or
+
+new NodeSDK({
+  // Define your processors
+  spanProcessors: [...],
+}).start()
+```

+OpenTelemetry's `NodeSDK` doesn't allow you to manually provide a Tracer Provider. You have to
+register it separately.

+```ts filename="telemetry.ts"
+import { trace } from '@opentelemetry/api'
+import { NodeSDK } from '@opentelemetry/sdk-node'
+
+// Manually set the Tracer Provider, NodeSDK will detect that it is already registered
+trace.setGlobalTracerProvider(...)
+
+new NodeSDK({
+  //...
+}).start()
+```
-
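For example, here is a sketch of the span processors mode, combining a batched OTLP exporter with
an immediate console exporter (the processor classes come from `@opentelemetry/sdk-trace-base` and
are used here for illustration):

```ts filename="telemetry.ts"
import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import {
  BatchSpanProcessor,
  ConsoleSpanExporter,
  SimpleSpanProcessor
} from '@opentelemetry/sdk-trace-base'

openTelemetrySetup({
  contextManager: new AsyncLocalStorageContextManager(),
  traces: {
    processors: [
      // batch spans before sending them to the OTLP backend
      new BatchSpanProcessor(new OTLPTraceExporter({ url: process.env['OTLP_URL'] })),
      // and mirror every span to stdout immediately, for debugging
      new SimpleSpanProcessor(new ConsoleSpanExporter())
    ]
  }
})
```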
Hive Gateway CLI embeds all of the official OpenTelemetry exporters. Please switch to manual
deployment or programmatic usage to install a non-official exporter.

A simple exporter that writes the spans to the `stdout` of the process. It is mostly used for
debugging purposes.

[See official documentation for more details](https://open-telemetry.github.io/opentelemetry-js/classes/_opentelemetry_sdk-trace-base.ConsoleSpanExporter.html).

Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>

```ts filename="telemetry.ts"
import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'

openTelemetrySetup({
  contextManager: new AsyncLocalStorageContextManager(),
  traces: {
    console: true
  }
})
```

@@ -402,68 +454,38 @@ export const gatewayConfig = defineConfig({

-```ts filename="index.ts"
-import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
-import { createStdoutExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry'
+```ts filename="telemetry.ts"
+import { NodeSDK, tracing } from '@opentelemetry/sdk-node'

-export const gateway = createGatewayRuntime({
-  plugins: ctx => [
-    useOpenTelemetry({
-      ...ctx,
-      exporters: [createStdoutExporter()]
-    })
-  ]
-})
+new NodeSDK({
+  // Use `spanProcessors` instead of `traceExporter` to avoid the default batching configuration
+  spanProcessors: [new tracing.SimpleSpanProcessor(new tracing.ConsoleSpanExporter())]
+}).start()
```
- - Your Jaeger instance needs to have OTLP ingestion enabeld, so verify that you have the - `COLLECTOR_OTLP_ENABLED=true` environment variable set, and that ports `4317` and `4318` are - acessible. - - -To test this integration, you can run a local Jaeger instance using Docker: - -``` -docker run -d --name jaeger \ - -e COLLECTOR_OTLP_ENABLED=true \ - -p 5778:5778 \ - -p 16686:16686 \ - -p 4317:4317 \ - -p 4318:4318 \ - jaegertracing/all-in-one:latest -``` -
-{/* NewRelic */} - -[NewRelic](https://newrelic.com/) supports [OTLP over HTTP/gRPC](#otlp-over-http), so you can use it -by configuring the `createOtlpHttpExporter`/`createOtlpGrpcExporter` to the NewRelic endpoint: +An exporter that writes the spans to an OTLP-supported backend using HTTP. - +Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}> -```ts filename="gateway.config.ts" -import { createOtlpHttpExporter, defineConfig } from '@graphql-hive/gateway' -export const gatewayConfig = defineConfig({ - openTelemetry: { - exporters: [ - createOtlpHttpExporter({ - url: 'http://:4318' - }) - ] +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { + exporter: new OTLPTraceExporter({ url: 'http://:4318' }) } }) ``` @@ -472,55 +494,38 @@ export const gatewayConfig = defineConfig({ -```ts filename="index.ts" -import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' -import { createOtlpHttpExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +```ts filename="telemetry.ts" +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' +import { NodeSDK } from '@opentelemetry/sdk-node' -export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ - ...ctx, - exporters: [ - createOtlpHttpExporter({ - url: 'http://:4318' - }) - ] - }) - ] -}) +new NodeSDK({ + traceExporter: new OTLPTraceExporter({ url: 'http://:4318' }) +}).start() ```
- - For additional information and NewRelic ingestion endpoints, see [**New Relic OTLP - endpoint**](https://docs.newrelic.com/docs/opentelemetry/best-practices/opentelemetry-otlp/). - -
-{/* Datadog */} - -[DataDog Agent](https://docs.datadoghq.com/agent/) supports [OTLP over HTTP/gRPC](#otlp-over-http), -so you can use it by pointing the `createOtlpHttpExporter` to the DataDog Agent endpoint: +An exporter that writes the spans to an OTLP-supported backend using gRPC. - +Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}> -```ts filename="gateway.config.ts" -import { createOtlpHttpExporter, defineConfig } from '@graphql-hive/gateway' -export const gatewayConfig = defineConfig({ - openTelemetry: { - exporters: [ - createOtlpHttpExporter({ - url: 'http://:4318' - }) - ] +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { + exporter: new OTLPTraceExporter({ url: 'http://:4317' }) } }) ``` @@ -529,59 +534,45 @@ export const gatewayConfig = defineConfig({ -```ts filename="index.ts" -import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' -import { createOtlpHttpExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +```ts filename="telemetry.ts" +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc' +import { NodeSDK } from '@opentelemetry/sdk-node' -export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ - ...ctx, - exporters: [ - createOtlpHttpExporter({ - url: 'http://:4318' - }) - ] - }) - ] -}) +new NodeSDK({ + traceExporter: new OTLPTraceExporter({ url: 'http://:4317' }) +}).start() ```
- - For additional information, see [**OpenTelemetry in - Datadog**](https://docs.datadoghq.com/opentelemetry/interoperability/otlp_ingest_in_the_agent/?tab=host#enabling-otlp-ingestion-on-the-datadog-agent). - -

[Jaeger](https://www.jaegertracing.io/) supports [OTLP over HTTP/gRPC](#otlp-over-http), so you can
use it by pointing the
`@opentelemetry/exporter-trace-otlp-http`/`@opentelemetry/exporter-trace-otlp-grpc` to the Jaeger
endpoint.

Your Jaeger instance needs to have OTLP ingestion enabled, so verify that you have the
`COLLECTOR_OTLP_ENABLED=true` environment variable set, and that ports `4317` and `4318` are
accessible.

Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}> -```ts filename="gateway.config.ts" -import { createZipkinExporter, defineConfig } from '@graphql-hive/gateway' +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' -export const gatewayConfig = defineConfig({ - openTelemetry: { - exporters: [ - createZipkinExporter({ - url: 'http://:9411/api/v2/spans' - // ... - // additional options to pass to @opentelemetry/exporter-zipkin - // https://www.npmjs.com/package/@opentelemetry/exporter-zipkin - }) - ] +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { + exporter: new OTLPTraceExporter({ url: 'http://:4318' }) } }) ``` @@ -590,25 +581,13 @@ export const gatewayConfig = defineConfig({ -```ts filename="index.ts" -import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' -import { createZipkinExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +```ts filename="telemetry.ts" +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' +import { NodeSDK } from '@opentelemetry/sdk-node' -export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ - ...ctx, - exporters: [ - createZipkinExporter({ - url: 'http://:9411/api/v2/spans' - // ... - // additional options to pass to @opentelemetry/exporter-zipkin - // https://www.npmjs.com/package/@opentelemetry/exporter-zipkin - }) - ] - }) - ] -}) +new NodeSDK({ + traceExporter: new OTLPTraceExporter({ url: 'http://:4318' }) +}).start() ``` @@ -617,58 +596,652 @@ export const gateway = createGatewayRuntime({ -
+ -### Batching +[NewRelic](https://newrelic.com/) supports [OTLP over HTTP/gRPC](#otlp-over-http), so you can use it +by configuring the +`@opentelemetry/exporter-trace-otlp-http`/`@opentelemetry/exporter-trace-otlp-grpc` to the NewRelic +endpoint: -All built-in processors allow you to configure batching options by an additional argument to the -factory function. +Please refer to the +[NewRelic OTLP documentation](https://docs.newrelic.com/docs/opentelemetry/best-practices/opentelemetry-otlp/#configure-endpoint-port-protocol) +for complete documentation and to find the apropriate endpoint. -The following configuration are allowed: +Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}> -- `true` (default): enables batching and use - [`BatchSpanProcessor`](https://opentelemetry.io/docs/specs/otel/trace/sdk/#batching-processor) - default config. -- `object`: enables batching and use - [`BatchSpanProcessor`](https://opentelemetry.io/docs/specs/otel/trace/sdk/#batching-processor) - with the provided configuration. -- `false` - disables batching and use - [`SimpleSpanProcessor`](https://opentelemetry.io/docs/specs/otel/trace/sdk/#simple-processor) + -By default, the batch processor will send the spans every 5 seconds or when the buffer is full. +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { + exporter: new OTLPTraceExporter({ + url: 'https://otlp.nr-data.net', // For US users, or https://otlp.eu01.nr-data.net for EU users + headers: { 'api-key': '' }, + compression: 'gzip' // Compression is recommended by NewRelic + }), + batching: { + // Depending on your traces size and network quality, you will probably need to tweak batching + // configuration. A batch should not be larger than 1Mo. + } + } +}) +``` -```json -{ - "scheduledDelayMillis": 5000, - "maxQueueSize": 2048, - "exportTimeoutMillis": 30000, - "maxExportBatchSize": 512 -} + + + + +```ts filename="telemetry.ts" +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' +import { NodeSDK } from '@opentelemetry/sdk-node' + +new NodeSDK({ + traceExporter: new OTLPTraceExporter({ + url: 'https://otlp.nr-data.net', // For US users, or https://otlp.eu01.nr-data.net for EU users + headers: { 'api-key': '' }, + compression: 'gzip' // Compression is recommended by NewRelic + }) +}).start() ``` -You can learn more about the batching options in the -[Picking the right span processor](https://opentelemetry.io/docs/languages/js/instrumentation/#picking-the-right-span-processor) -page. + -### Reported Spans +
-The plugin exports OpenTelemetry spans for the following operations: +
-
+
+
+[DataDog Agent](https://docs.datadoghq.com/agent/) supports [OTLP over HTTP/gRPC](#otlp-over-http),
+so you can use it by pointing the `@opentelemetry/exporter-trace-otlp-http` exporter at the DataDog
+Agent endpoint.
-HTTP Server
-
-  This span is created for each incoming HTTP request, and acts as a root span for the entire
-  request. Disabling this span will also disable the other hooks and spans.
-
+
+You can also use the official DataDog Tracer Provider by deploying Hive Gateway manually and
+installing the `dd-trace` dependency.
-By default, the plugin will a root span for the HTTP layer as a span (`METHOD /path`) with the
-following attributes for the HTTP request:
+
+Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>
+
+
+DataDog's official `TracerProvider` is the recommended approach, because it enables and sets up
+the correlation with DataDog APM spans.
+
+```ts filename="telemetry.ts"
+import ddTrace from 'dd-trace'
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+
+const { TracerProvider } = ddTrace.init({
+  // Your configuration
+})
+
+openTelemetrySetup({
+  contextManager: null, // Don't register a context manager, DataDog Agent registers its own.
+  traces: {
+    tracerProvider: new TracerProvider()
+  }
+})
+```
+
+
+
+You can skip the DataDog Agent if you only want to use DataDog as a tracing backend.
+
+DataDog is compatible with the standard OTLP over HTTP export format.
+
+```ts filename="telemetry.ts"
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'
+import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
+
+openTelemetrySetup({
+  contextManager: new AsyncLocalStorageContextManager(),
+  traces: {
+    exporter: new OTLPTraceExporter({
+      url: 'http://:4318'
+    })
+  }
+})
+```
+
+
+
+You can skip the DataDog Agent if you only want to use DataDog as a tracing backend.
+
+DataDog is compatible with the standard OTLP over HTTP export format.
+
+```ts filename="telemetry.ts"
+import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
+import { NodeSDK } from '@opentelemetry/sdk-node'
+
+new NodeSDK({
+  traceExporter: new OTLPTraceExporter({
+    url: 'http://:4318'
+  })
+}).start()
+```
+
+
+ +
+
+
+[Zipkin](https://zipkin.io/) uses a custom protocol to send spans, so you can use the Zipkin
+exporter to send the spans to a Zipkin backend.
+
+Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}> + + + +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { ZipkinExporter } from '@opentelemetry/exporter-zipkin' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { + exporter: new ZipkinExporter({ + url: '' + }) + } +}) +``` + + + + + +```ts filename="telemetry.ts" +import { ZipkinExporter } from '@opentelemetry/exporter-zipkin' +import { NodeSDK } from '@opentelemetry/sdk-node' + +new NodeSDK({ + traceExporter: new ZipkinExporter({ + url: '' + }) +}).start() +``` + + + + + +```ts filename="telemetry.ts" +import ddTrace from 'dd-trace' +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' + +const { TracerProvider } = ddTrace.init({ + // Your configuration +}) + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { + tracerProvider: new TracerProvider() + } +}) +``` + + + +
+ +
+
+
+#### Context Propagation
+
+By default, Hive Gateway will
+[propagate the trace context](https://opentelemetry.io/docs/concepts/context-propagation/) between
+the incoming HTTP request and the outgoing HTTP requests using the standard Baggage and Trace
+Context propagators.
+
+You can configure the list of propagators that will be used. All official propagators are bundled
+with Hive Gateway CLI. To use other non-official propagators, please switch to manual deployment.
+
+You will also have to pick a Context Manager. It is responsible for keeping track of the current
+OpenTelemetry Context at any point in the program. We recommend using the official
+`AsyncLocalStorageContextManager` from `@opentelemetry/context-async-hooks` when the
+`AsyncLocalStorage` API is available. In other cases, you can either try
+`@opentelemetry/context-zone`, or pass `null` to not use any context manager.
+
+If no async-compatible Context Manager is registered, automatic parenting of custom spans will not
+work. You will have to retrieve the current OpenTelemetry context from the GraphQL context, or
+from the `getOtelContext` method of the plugin instance.
+
+Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>
+
+
+```ts filename="telemetry.ts"
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'
+import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc'
+import { B3Propagator } from '@opentelemetry/propagator-b3'
+
+openTelemetrySetup({
+  contextManager: new AsyncLocalStorageContextManager(),
+  traces: {
+    exporter: new OTLPTraceExporter({ url: 'http://:4317' })
+  },
+  propagators: [new B3Propagator()]
+})
+```
+
+
+
+```ts filename="telemetry.ts"
+import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc'
+import { B3Propagator } from '@opentelemetry/propagator-b3'
+import { NodeSDK } from '@opentelemetry/sdk-node'
+
+new NodeSDK({
+  traceExporter: new OTLPTraceExporter({ url: 'http://:4317' }),
+  textMapPropagator: new B3Propagator()
+}).start()
+```
+
+
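+As a quick sanity check, you can send a request that already carries a W3C `traceparent` header and
+verify that the gateway continues that trace instead of starting a new one. This is a minimal
+sketch, assuming the default Trace Context propagator is active and the gateway listens on
+`http://localhost:4000/graphql` (adjust both to your setup):
+
+```ts filename="check-propagation.ts"
+// The trace and span IDs below are arbitrary example values.
+// Format: 00-<trace-id>-<parent-span-id>-<trace-flags>
+const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01'
+
+const response = await fetch('http://localhost:4000/graphql', {
+  method: 'POST',
+  headers: {
+    'content-type': 'application/json',
+    traceparent
+  },
+  body: JSON.stringify({ query: '{ __typename }' })
+})
+
+console.log(response.status)
+// The spans exported for this request should share the trace ID 4bf92f3577b34da6a3ce929d0e0e4736.
+```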
+
+#### Span Batching
+
+By default, if you provide only a Trace Exporter, it will be wrapped into a `BatchSpanProcessor` to
+batch spans together and reduce the number of requests to your backend.
+
+This is an important feature for real world production environments, and you can configure its
+behaviour to exactly suit your infrastructure limits.
+
+By default, the batch processor will send the spans every 5 seconds or when the buffer is full.
+
+Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>
+
+
+The following configuration values are allowed:
+
+- `true` (default): enables batching and uses the
+  [`BatchSpanProcessor`](https://opentelemetry.io/docs/specs/otel/trace/sdk/#batching-processor)
+  default config.
+- `object`: enables batching and uses the
+  [`BatchSpanProcessor`](https://opentelemetry.io/docs/specs/otel/trace/sdk/#batching-processor)
+  with the provided configuration.
+- `false`: disables batching and uses the
+  [`SimpleSpanProcessor`](https://opentelemetry.io/docs/specs/otel/trace/sdk/#simple-processor)
+
+```ts filename="telemetry.ts"
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+
+openTelemetrySetup({
+  traces: {
+    exporter: ...,
+    batching: {
+      exportTimeoutMillis: 30_000, // Defaults to 30_000ms
+      maxExportBatchSize: 512, // Defaults to 512 spans
+      maxQueueSize: 2048, // Defaults to 2048 spans
+      scheduledDelayMillis: 5_000, // Defaults to 5_000ms
+    }
+  },
+})
+```
+
+
+
+```ts filename="telemetry.ts"
+import { NodeSDK, tracing } from '@opentelemetry/sdk-node'
+
+const exporter = ...
+
+new NodeSDK({
+  spanProcessors: [
+    new tracing.BatchSpanProcessor(
+      exporter,
+      {
+        exportTimeoutMillis: 30_000, // Defaults to 30_000ms
+        maxExportBatchSize: 512, // Defaults to 512 spans
+        maxQueueSize: 2048, // Defaults to 2048 spans
+        scheduledDelayMillis: 5_000, // Defaults to 5_000ms
+      },
+    ),
+  ],
+}).start()
+```
+
+
+
+You can learn more about the batching options in the
+[Picking the right span processor](https://opentelemetry.io/docs/languages/js/instrumentation/#picking-the-right-span-processor)
+page.
+
+#### Sampling
+
+When your gateway has a lot of traffic, tracing every request can become very expensive.
+
+A common mitigation is to trace only a subset of requests, using a strategy to decide which
+requests to trace.
+
+The most common strategy combines parent-based sampling (a span is sampled if its parent is
+sampled) with a trace-ID ratio (each trace, one per request, has a given chance of being sampled).
+
+By default, all requests are traced. You can either provide your own Sampler, or provide a sampling
+rate which will be used to set up a Parent + TraceID Ratio strategy.
+
+Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>
+
+
+```ts filename="telemetry.ts"
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+import { JaegerRemoteSampler } from '@opentelemetry/sampler-jaeger-remote'
+import { AlwaysOnSampler } from '@opentelemetry/sdk-trace-base'
+
+openTelemetrySetup({
+  // Use the Parent + TraceID Ratio strategy
+  samplingRate: 0.1,
+
+  // Or use a custom Sampler (provide either `samplingRate` or `sampler`, not both)
+  sampler: new JaegerRemoteSampler({
+    endpoint: 'http://your-jaeger-agent:14268/api/sampling',
+    serviceName: 'your-service-name',
+    initialSampler: new AlwaysOnSampler(),
+    poolingInterval: 60000 // 60 seconds
+  })
+})
+```
+
+
+
+```ts filename="telemetry.ts"
+import { JaegerRemoteSampler } from '@opentelemetry/sampler-jaeger-remote'
+import { NodeSDK, tracing } from '@opentelemetry/sdk-node'
+
+new NodeSDK({
+  // Use the Parent + TraceID Ratio strategy...
+  // sampler: new tracing.ParentBasedSampler({
+  //   root: new tracing.TraceIdRatioBasedSampler(0.1)
+  // }),
+
+  // ...or use a custom Sampler (only one `sampler` can be set)
+  sampler: new JaegerRemoteSampler({
+    endpoint: 'http://your-jaeger-agent:14268/api/sampling',
+    serviceName: 'your-service-name',
+    initialSampler: new tracing.AlwaysOnSampler(),
+    poolingInterval: 60000 // 60 seconds
+  })
+}).start()
+```
+
+
+
+#### Limits
+
+To ensure that you don't overwhelm your tracing ingestion infrastructure, you can set limits on
+both the cardinality and the amount of data the OpenTelemetry SDK will be allowed to generate.
+
+Hive Gateway openTelemetrySetup() (recommended),
OpenTelemetry NodeSDK
]}>
+
+
+```ts filename="telemetry.ts"
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+
+openTelemetrySetup({
+  generalLimits: {},
+  traces: {
+    spanLimits: {}
+  }
+})
+```
+
+
+
+```ts filename="telemetry.ts"
+import { NodeSDK } from '@opentelemetry/sdk-node'
+
+new NodeSDK({
+  generalLimits: {},
+  spanLimits: {}
+}).start()
+```
+
+
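+For reference, here is a populated sketch. The field names follow the standard OpenTelemetry JS SDK
+`GeneralLimits`/`SpanLimits` shapes, and the values are illustrative, not recommendations:
+
+```ts filename="telemetry.ts"
+import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup'
+
+openTelemetrySetup({
+  generalLimits: {
+    attributeCountLimit: 128, // max attributes per span, event or link
+    attributeValueLengthLimit: 1024 // truncate longer attribute values
+  },
+  traces: {
+    spanLimits: {
+      attributeCountLimit: 128,
+      eventCountLimit: 128, // max events per span
+      linkCountLimit: 32 // max links per span
+    }
+  }
+})
+```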
+
+
+### Configuration
+
+Once you have an OpenTelemetry setup file, you must import it from your `gateway.config.ts` file. It
+must be the very first import so that any other package relying on OpenTelemetry has access to the
+correct configuration.
+
+You can then enable OpenTelemetry Tracing support in the Gateway configuration.
+
+
+
+With the CLI, you can enable OpenTelemetry tracing either with the `--opentelemetry` option or via
+the configuration file.
+
+
+
+```bash
+hive-gateway supergraph --opentelemetry
+```
+
+
+
+```ts filename="gateway.config.ts"
+import './telemetry.ts'
+import { defineConfig } from '@graphql-hive/gateway'
+
+export const gatewayConfig = defineConfig({
+  openTelemetry: {
+    traces: true
+  }
+})
+```
+
+
+
+
+
+```sh npm2yarn
+npm i @graphql-mesh/plugin-opentelemetry
+```
+
+```ts filename="index.ts"
+import './telemetry.ts'
+import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
+import { useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry'
+
+export const gateway = createGatewayRuntime({
+  plugins: ctx => [
+    useOpenTelemetry({
+      ...ctx,
+      traces: true
+    })
+  ]
+})
+```
+
+
+
+#### OpenTelemetry Context
+
+To correlate all observability events (like tracing, metrics, logs...), OpenTelemetry has a global
+and standard Context API.
+
+This context also allows keeping the link between related spans (for parenting or linking of
+spans).
+
+You can configure how the plugin uses this context.
+
+```ts filename="gateway.config.ts"
+const openTelemetryConfig = {
+  useContextManager: true, // If false, the parenting of spans will not rely on OTEL Context
+  inheritContext: true, // If false, the root span will not be based on OTEL Context, it will always be a root span
+  propagateContext: true // If false, the context will not be propagated to subgraphs
+}
+```
+
+#### OpenTelemetry Diagnostics
+
+If you encounter an issue with your OpenTelemetry setup, you can enable the Diagnostics API. This
+will enable logging of the OpenTelemetry SDK based on the `OTEL_LOG_LEVEL` env variable.
+
+By default, Hive Gateway configures the Diagnostics API to output logs using Hive Gateway's logger.
+You can disable this using the `configureDiagLogger` option.
+
+```ts filename="gateway.config.ts"
+const openTelemetryConfig = {
+  // Use the default DiagLogger, which outputs logs directly to stdout
+  configureDiagLogger: false
+}
+```
+
+#### Graceful shutdown
+
+Since spans are batched by default, it is possible to miss some traces if the batching processor is
+not properly flushed when the process exits.
+
+To avoid this kind of data loss, Hive Gateway calls the `forceFlush` method on the registered
+Tracer Provider by default. You can customize which method to call or entirely disable this
+behaviour by using the `flushOnDispose` option.
+
+```ts filename="gateway.config.ts"
+const openTelemetryConfig = {
+  // Disable the auto-flush on shutdown
+  flushOnDispose: false
+  // or call a custom method instead:
+  // flushOnDispose: 'flush'
+}
+```
+
+#### Tracer
+
+By default, Hive Gateway will create a tracer named `gateway`. You can provide your own tracer if
+needed.
+
+```ts filename="gateway.config.ts"
+import { trace } from '@opentelemetry/api'
+
+const openTelemetryConfig = {
+  traces: {
+    tracer: trace.getTracer('my-custom-tracer')
+  }
+}
+```
+
+### Reported Spans
+
+The plugin exports the following OpenTelemetry Spans:
+
+#### Background Spans
+
+
+
+Gateway Initialization
+
+By default, the plugin will create a span from the start of the gateway process to the first schema
+load.
+
+All spans happening during this time will be parented under this initialization span, including the
+schema loading span.
+
+You may disable this by setting `traces.spans.initialization` to `false`:
+
+```ts
+const openTelemetryConfig = {
+  traces: {
+    spans: {
+      initialization: false
+    }
+  }
+}
+```
+
+ +
+ +Schema Loading + +By default, the plugin will create a span covering each loading of a schema. It can be useful when +polling or file watch is enabled to identify when the schema changes. + +Schema loading in Hive Gateway can be lazy, which means it can be triggered as part of the handling +of a request. If it happens, the schema loading span will be added as a link to the current span. + +You may disable this by setting `traces.spans.schema` to `false`: + +```ts +const openTelemetryConfig = { + traces: { + spans: { + schema: false + } + } +} +``` + +
+ +#### Request Spans + +
+
+
+HTTP Request
+
+
+  This span is created for each incoming HTTP request, and acts as a root span for the entire
+  request. Disabling this span will also disable the other hooks and spans.
+
+
+By default, the plugin will create a root span for the HTTP layer as a span (` /path`, e.g.
+`POST /graphql`) with the following attributes:
+
+- `http.method`: The HTTP method
+- `http.url`: The HTTP URL
+- `http.route`: The matched HTTP route
 - `http.scheme`: The HTTP scheme
 - `http.host`: The HTTP host
 - `net.host.name`: The hostname
@@ -686,42 +1259,91 @@ And the following attributes for the HTTP response:
 [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/).
 
-You may disable this by setting `spans.http` to `false`:
+You may disable this by setting `traces.spans.http` to `false`:
 
 ```ts
 const openTelemetryConfig = {
-  exporters: [
-    /* ... */
-  ],
-  spans: {
-    /* ... */
-    http: false
+  traces: {
+    spans: {
+      http: false
+    }
   }
 }
 ```
 
-Or, you may filter the spans by setting the `spans` configuration to a function:
+Or, you may filter the spans by setting the `traces.spans.http` configuration to a function:
 
 ```ts filename="gateway.config.ts"
 const openTelemetryConfig = {
-  exporters: [
-    /* ... */
-  ],
-  spans: {
-    /* ... */
-    http: payload => {
-      // Filter the spans based on the payload
-      return true
+  traces: {
+    spans: {
+      http: ({ request }) => {
+        // Filter the spans based on the request
+        return true
+      }
     }
   }
 }
 ```
+ +
+
+
+GraphQL Operation
+
+
+  This span is created for each GraphQL operation found in incoming HTTP requests, and acts as a
+  parent span for the entire GraphQL operation. Disabling this span will also disable the other
+  hooks and spans related to the execution of the operation.
+
+
+By default, the plugin will create a span for the GraphQL layer as a span
+(`graphql.operation ` or `graphql.operation` for unexecutable operations) with the
+following attributes:
+
+- `graphql.operation.type`: The type of operation (`query`, `mutation` or `subscription`).
+- `graphql.operation.name`: The name of the operation to execute, `Anonymous` for operations
+  without a name.
+- `graphql.document`: The operation document as a GraphQL string
+
-  The `payload` object is the same as the one passed to the [`onRequest`
-  hook](https://github.com/ardatan/whatwg-node/blob/master/packages/server/src/plugins/types.ts#L16-L25).
+  An error in the parse phase will be reported as an [error
+  span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the
+  error message and as an OpenTelemetry
+  [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/).
+
+You may disable this by setting `traces.spans.graphql` to `false`:
+
+```ts
+const openTelemetryConfig = {
+  traces: {
+    spans: {
+      graphql: false
+    }
+  }
+}
+```
+
+Or, you may filter the spans by setting the `traces.spans.graphql` configuration to a function which
+takes the GraphQL context as a parameter:
+
+```ts filename="gateway.config.ts"
+const openTelemetryConfig = {
+  traces: {
+    spans: {
+      graphql: ({ context }) => {
+        // Filter the span based on the GraphQL context
+        return true
+      }
    }
  }
}
```
+
@@ -734,6 +1356,10 @@ following attributes: - `graphql.document`: The GraphQL query string - `graphql.operation.name`: The operation name +If a parsing error is reported, the following attribute will also be present: + +- `graphql.error.count`: `1` if a parse error occured + An error in the parse phase will be reported as an [error span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the @@ -741,301 +1367,746 @@ following attributes: [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/). -You may disable this by setting `spans.graphqlParse` to `false`: +You may disable this by setting `traces.spans.graphqlParse` to `false`: ```ts filename="gateway.config.ts" const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... */ - graphqlParse: false + traces: { + spans: { + graphqlParse: false + } } } ``` -Or, you may filter the spans by setting the `spans` configuration to a function: +Or, you may filter the spans by setting the `traces.spans.graphqlParse` configuration to a function: ```ts filename="gateway.config.ts" const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... */ - graphqlParse: payload => { - // Filter the spans based on the payload - return true + traces: { + spans: { + graphqlParse: ({ context }) => { + // Filter the spans based on the GraphQL context + return true + } + } + } +} +``` + +
+ +
+ +GraphQL Validate + +By default, the plugin will report the validation phase as a span (`graphql.validate`) with the +following attributes: + +- `graphql.document`: The GraphQL query string +- `graphql.operation.name`: The operation name + +If a validation error is reported, the following attribute will also be present: + +- `graphql.error.count`: The number of validation errors + + + An error in the validate phase will be reported as an [error + span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the + error message and as an OpenTelemetry + [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/). + + +You may disable this by setting `traces.spans.graphqlValidate` to `false`: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + graphqlValidate: false + } + } +} +``` + +Or, you may filter the spans by setting the `traces.spans.graphqlValidate` configuration to a +function: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + graphqlValidate: ({ context }) => { + // Filter the spans based on the GraphQL context + return true + } + } + } +} +``` + +
+ +
+
+
+GraphQL Context Building
+
+By default, the plugin will report the context building phase as a span (`graphql.context`). This
+span doesn't have any attributes.
+
+
+  An error in the context building phase will be reported as an [error
+  span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the
+  error message and as an OpenTelemetry
+  [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/).
+
+
+You may disable this by setting `traces.spans.graphqlContextBuilding` to `false`:
+
+```ts filename="gateway.config.ts"
+const openTelemetryConfig = {
+  traces: {
+    spans: {
+      graphqlContextBuilding: false
+    }
+  }
+}
+```
+
+Or, you may filter the spans by setting the `traces.spans.graphqlContextBuilding` configuration to a
+function:
+
+```ts filename="gateway.config.ts"
+const openTelemetryConfig = {
+  traces: {
+    spans: {
+      graphqlContextBuilding: ({ context }) => {
+        // Filter the spans based on the GraphQL context
+        return true
+      }
    }
  }
}
```
+
+ +
+ +GraphQL Execute + +By default, the plugin will report the execution phase as a span (`graphql.execute`) with the +following attributes: + +- `graphql.document`: The GraphQL query string +- `graphql.operation.name`: The operation name (`Anonymous` for operations without name) +- `graphql.operation.type`: The operation type (`query`/`mutation`/`subscription`) + +If an execution error is reported, the following attribute will also be present: + +- `graphql.error.count`: The number of errors in the execution result + + + An error in the execute phase will be reported as an [error + span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the + error message and as an OpenTelemetry + [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/). + + +You may disable this by setting `traces.spans.graphqlExecute` to `false`: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + graphqlExecute: false + } + } +} +``` + +Or, you may filter the spans by setting the `traces.spans.graphqlExecute` configuration to a +function: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + graphqlExecute: ({ context }) => { + // Filter the spans based on the GraphQL context + return true + } + } + } +} +``` + +
+ +
+ +Subgraph Execute + +By default, the plugin will report the subgraph execution phase as a client span +(`subgraph.execute`) with the following attributes: + +- `graphql.document`: The GraphQL query string executed to the upstream +- `graphql.operation.name`: The operation name +- `graphql.operation.type`: The operation type (`query`/`mutation`/`subscription`) +- `gateway.upstream.subgraph.name`: The name of the upstream subgraph + +You may disable this by setting `traces.spans.subgraphExecute` to `false`: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + subgraphExecute: false + } + } +} +``` + +Or, you may filter the spans by setting the `traces.spans.subgraphExecute` configuration to a +function: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + subgraphExecute: ({ executionRequest, subgraphName }) => { + // Filter the spans based on the target SubGraph name and the Execution Request + return true + } + } + } +} +``` + +
+ +
+ +Upstream Fetch + +By default, the plugin will report the upstream fetch phase as a span (`http.fetch`) with the +information about outgoing HTTP calls. + +The following attributes are included in the span: + +- `http.method`: The HTTP method +- `http.url`: The HTTP URL +- `http.route`: The HTTP status code +- `http.scheme`: The HTTP scheme +- `net.host.name`: The hostname +- `http.host`: The HTTP host +- `http.request.resend_count`: Number of retry attempt. Only present starting from the first retry. + +And the following attributes for the HTTP response: + +- `http.status_code`: The HTTP status code + + + An error in the fetch phase (including responses with a non-ok status code) will be reported as an + [error span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including + the error message and as an OpenTelemetry + [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/). + + +You may disable this by setting `traces.spans.upstreamFetch` to `false`: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + upstreamFetch: false + } + } +} +``` + +Or, you may filter the spans by setting the `traces.spans.upstreamFetch` configuration to a +function: + +```ts filename="gateway.config.ts" +const openTelemetryConfig = { + traces: { + spans: { + upstreamFetch: ({ executionRequest }) => { + // Filter the spans based on the Execution Request + return true + } } } } ``` - - The `payload` object is the same as the one passed to the [`onParse` - hook](https://the-guild.dev/graphql/envelop/v4/plugins/lifecycle#before). - -
+### Reported Events
+
+The plugin exports the following OpenTelemetry Events.
+
+Events are attached to the current span, meaning that they will be attached to your custom spans if
+you use them. It also means that events can be orphaned if you haven't properly set up an async
+compatible Context Manager.
+
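+For instance, with an async compatible Context Manager registered, an event recorded through the
+standard `@opentelemetry/api` lands on whatever span is currently active. A minimal sketch (the
+event name and attribute below are made up for illustration):
+
+```ts
+import { trace } from '@opentelemetry/api'
+
+// Inside any code running during a traced request, e.g. a custom plugin hook:
+trace.getActiveSpan()?.addEvent('user.resolved', { 'user.id': '42' })
+```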
-GraphQL Validate +Cache Read and Write -By default, the plugin will report the validation phase as a span (`graphql.validate`) with the -following attributes: +By default, the plugin will report any cache read or write as an event. The possible event names +are: -- `graphql.document`: The GraphQL query string -- `graphql.operation.name`: The operation name +- `gateway.cache.miss`: A cache read happened, but the key didn't match any entity +- `gateway.cache.hit`: A cache read happened, and the key did match an entity +- `gateway.cache.write`: A new entity have been added to the cache store - - An error in the validate phase will be reported as an [error - span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the - error message and as an OpenTelemetry - [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/). - +All those events have the following attributes: + +- `gateway.cache.key`: The key of the cache entry +- `gateway.cache.ttl`: The ttl of the cache entry -You may disable this by setting `spans.graphqlValidate` to `false`: +You may disable this by setting `traces.events.cache` to `false`: ```ts filename="gateway.config.ts" const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... */ - graphqlValidate: false + traces: { + events: { + cache: false + } } } ``` -Or, you may filter the spans by setting the `spans` configuration to a function: +Or, you may filter the spans by setting the `traces.spans.upstreamFetch` configuration to a +function: ```ts filename="gateway.config.ts" const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... */ - graphqlValidate: payload => { - // Filter the spans based on the payload - return true + traces: { + events: { + cache: ({ key, action }) => { + // Filter the event based on action ('read' or 'write') and the entity key + return true + } } } } ``` - - The `payload` object is the same as the one passed to the [`onValidate` - hook](https://the-guild.dev/graphql/envelop/v4/plugins/lifecycle#before-1). - -
-GraphQL Execute - -By default, the plugin will report the execution phase as a span (`graphql.execute`) with the -following attributes: +Cache Error -- `graphql.document`: The GraphQL query string -- `graphql.operation.name`: The operation name -- `graphql.operation.type`: The operation type (`query`/`mutation`/`subscription`) +By default, the plugin will report any cache error as an event (`gateway.cache.error`). This events +have the following attributes: - - An error in the execute phase will be reported as an [error - span](https://opentelemetry.io/docs/specs/semconv/exceptions/exceptions-spans/), including the - error message and as an OpenTelemetry - [`Exception`](https://opentelemetry.io/docs/specs/otel/trace/exceptions/). - +- `gateway.cache.key`: The key of the cache entry +- `gateway.cache.ttl`: The ttl of the cache entry +- `gateway.cache.action`: The type of action (`read` or `write`) +- `exception.type`: The type of error (the `code` if it exists, the message otherwise) +- `exception.message`: The message of the error +- `exception.stacktrace`: The error stacktrace as a string -You may disable this by setting `spans.graphqlExecute` to `false`: +You may disable this by setting `traces.events.cache` to `false`: ```ts filename="gateway.config.ts" const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... */ - graphqlExecute: false + traces: { + events: { + cache: false + } } } ``` -Or, you may filter the spans by setting the `spans` configuration to a function: +Or, you may filter the spans by setting the `traces.spans.upstreamFetch` configuration to a +function: ```ts filename="gateway.config.ts" const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... */ - graphqlExecute: payload => { - // Filter the spans based on the payload - return true + traces: { + events: { + cache: ({ key, action }) => { + // Filter the event based on action ('read' or 'write') and the entity key + return true + } } } } ``` - - The `payload` object is the same as the one passed to the [`onExecute` - hook](https://the-guild.dev/graphql/envelop/v4/plugins/lifecycle#before-3). - -
-
+### Custom spans -Subgraph Execute +Hive Gateway relys on official OpenTelemetry API, which means it is compatible with +`@opentelemetry/api`. -By default, the plugin will report the subgraph execution phase as a span (`subgraph.execute`) with -the following attributes: +You can use any tool relying on it too, or directly use it to create your own custom spans. -- `graphql.document`: The GraphQL query string executed to the upstream -- `graphql.operation.name`: The operation name -- `graphql.operation.type`: The operation type (`query`/`mutation`/`subscription`) -- `gateway.upstream.subgraph.name`: The name of the upstream subgraph +To parent spans correctly, an async compatible Context Manager is highly recommended, but we also +provide an alternative if your runtime doesn't implement `AsyncLocalStorage` or you want to avoid +the related performance cost. -In addition, the span will include the following attributes for the HTTP requests; + -- `http.method`: The HTTP method -- `http.url`: The HTTP URL -- `http.route`: The HTTP status code -- `http.scheme`: The HTTP scheme -- `net.host.name`: The hostname -- `http.host`: The HTTP host + -And the following attributes for the HTTP response: +If you are using an async compatible context manager, you can simply use the standard +`@opentelemetry/api` methods, as shown in +[OpenTelemetry documentation](https://opentelemetry.io/docs/languages/js/instrumentation/#create-spans). -- `http.status_code`: The HTTP status code + + + -You may disable this by setting `spans.subgraphExecute` to `false`: +The Gateway's tracer is available in the graphql context. If you don't have access to the context, +you can either create your own tracer, or manually instanciate the opentelemetry plugin (see +Programatic Usage). ```ts filename="gateway.config.ts" -const openTelemetryConfig = { - exporters: [ - /* ... */ +import { defineConfig } from '@graphql-hive/gateway' +import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { useGenericAuth } from '@envelop/generic-auth' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { console: true }, +}) + +export const gatewayConfig = defineConfig({ + openTelemetry: { + traces: true, + }, + plugins: () => [ + useGenericAuth({ + resolveUserFn: ({ context }) => + // `startActiveSpan` will rely on the current context to parent the new span correctly + // You can also use your own tracer instead of Hive Gateway's one. + context.openTelemetry.tracer.startActiveSpan('users.fetch', (span) => { + const user = await fetchUser(extractUserIdFromContext(context)) + span.end(); + return user + }) + }), ], - spans: { - /* ... */ - subgraphExecute: false +}) +``` + + + + + +The Gateway's tracer is available in the graphql context (`context.openTelemetry.tracer`). If you +don't have access to the graphql context, you can either create your own tracer, or use `getTracer` +plugin method (as shown in this example). 
+
+```ts filename="index.ts"
+import './telemetry.ts'
+import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
+import { useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry'
+import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup'
+import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks'
+import { useGenericAuth } from '@envelop/generic-auth'
+
+openTelemetrySetup({
+  contextManager: new AsyncLocalStorageContextManager(),
+  traces: { console: true },
+})
+
+export const gateway = createGatewayRuntime({
+  plugins: ctx => {
+    const otelPlugin = useOpenTelemetry({
+      ...ctx,
+      traces: true
+    })
+
+    return [
+      otelPlugin,
+      useGenericAuth({
+        resolveUserFn: ({ context }) =>
+          // `startActiveSpan` will rely on the current context to parent the new span correctly.
+          // You can also use your own tracer instead of Hive Gateway's one.
+          otelPlugin.getTracer().startActiveSpan('users.fetch', async span => {
+            const user = await fetchUser(extractUserIdFromContext(context))
+            span.end()
+            return user
+          })
+      })
+    ]
+  }
+})
+```
+
+
+
+
+
+If you can't or don't want to use the Context Manager, Hive Gateway provides a cross-platform
+context tracking mechanism.
+
+To parent spans correctly, you will have to manually provide the current OTEL context. You can
+retrieve the current OTEL context by either using the `context.openTelemetry.activeContext`
+function, or the plugin's `getOtelContext()` method.
+
+
+
+
+If you don't have access to the graphql context, manually instantiate the OpenTelemetry plugin (see
+Programmatic Usage).
+
+```ts filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway'
+import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup'
+import { useGenericAuth } from '@envelop/generic-auth'
+
+openTelemetrySetup({
+  contextManager: null, // Don't register any context manager
+  traces: { console: true },
+})
+
+export const gatewayConfig = defineConfig({
+  openTelemetry: {
+    useContextManager: false, // Make sure to disable context manager usage
+    traces: true,
+  },
+  plugins: () => [
+    useGenericAuth({
+      resolveUserFn: ({ context }) => {
+        const ctx = context.openTelemetry.activeContext()
+
+        // Explicitly pass the parent context as the third argument.
+        return context.openTelemetry.tracer.startActiveSpan('users.fetch', {}, ctx, async span => {
+          const user = await fetchUser(extractUserIdFromContext(context))
+          span.end()
+          return user
+        })
+      }
+    }),
+  ],
+})
+```
+
+
+
+
+The Gateway's tracer is available in the graphql context (`context.openTelemetry.tracer`). If you
+don't have access to the graphql context, you can either create your own tracer, or use the
+`getTracer` plugin method (as shown in this example).
+
+```ts filename="index.ts"
+import './telemetry.ts'
+import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
+import { useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry'
+import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup'
+import { useGenericAuth } from '@envelop/generic-auth'
+
+openTelemetrySetup({
+  contextManager: null, // Don't register any context manager
+  traces: { console: true },
+})
+
+export const gateway = createGatewayRuntime({
+  plugins: ctx => {
+    const otelPlugin = useOpenTelemetry({
+      ...ctx,
+      useContextManager: false, // Make sure to disable context manager usage
+      traces: true
+    })
+
+    return [
+      otelPlugin,
+      useGenericAuth({
+        resolveUserFn: ({ context }) => {
+          // Retrieve the current OTEL Context by passing an object containing the graphql `context`,
+          // the http `request` and/or the `executionRequest`. The appropriate context will be
+          // found based on which of those properties are provided.
+          const ctx = otelPlugin.getOtelContext({ context })
+
+          // Explicitly pass the parent context as the third argument.
+          return otelPlugin.getTracer().startActiveSpan('users.fetch', {}, ctx, async span => {
+            const user = await fetchUser(extractUserIdFromContext(context))
+            span.end()
+            return user
+          })
+        }
+      })
+    ]
+  }
+})
```
+
+
+
+
+ -
+ -Upstream Fetch + -By default, the plugin will report the upstream fetch phase as a span (`http.fetch`) with the -information about outgoing HTTP calls. +### Custom Span Attributes, Events and Links -The following attributes are included in the span: +You can add custom attribute to Hive Gateway's spans by using the standard `@opentelemetry/api` +package. You can use the same package to record custom +[Events](https://opentelemetry.io/docs/languages/js/instrumentation/#span-events) or +[Links](https://opentelemetry.io/docs/languages/js/instrumentation/#span-links). -- `http.method`: The HTTP method -- `http.url`: The HTTP URL -- `http.route`: The HTTP status code -- `http.scheme`: The HTTP scheme -- `net.host.name`: The hostname -- `http.host`: The HTTP host +This can be done by getting access to the current span. -And the following attributes for the HTTP response: +If you have an async compatible Context Manager setup, you can use the standard OpenTelemetry API to +retreive the current span as shown in +[OpenTelemetry documentation](https://opentelemetry.io/docs/languages/js/instrumentation/#get-the-current-span). -- `http.status_code`: The HTTP status code +Otherwise, Hive Gateway provide it's own cross-runtime Context tracking mechanism. In this case, you +can use +[`trace.getSpan` standard function](https://opentelemetry.io/docs/languages/js/instrumentation/#get-a-span-from-context) +to get access to the current span. + + + + + +If you are using an async compatible context manager, you can simply use the standard +`@opentelemetry/api` methods, as shown in +[OpenTelemetry documentation](https://opentelemetry.io/docs/languages/js/instrumentation/#create-spans). + + -You may disable this by setting `spans.upstreamFetch` to `false`: + ```ts filename="gateway.config.ts" -const openTelemetryConfig = { - exporters: [ - /* ... */ +import { defineConfig } from '@graphql-hive/gateway' +import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup' +import { trace } from '@opentelemetry/api' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { useGenericAuth } from '@envelop/generic-auth' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { console: true }, +}) + +export const gatewayConfig = defineConfig({ + openTelemetry: { + traces: true, + }, + plugins: () => [ + useGenericAuth({ + resolveUserFn: ({ context }) => { + const span = trace.getActiveSpan(); + const user = await fetchUser(extractUserIdFromContext(context)) + span.setAttribute('user.id', user.id); + return user + } + }), ], - spans: { - /* ... */ - upstreamFetch: false - } -} +}) ``` -Or, you may filter the spans by setting the `spans.upstreamFetch` configuration to a function: + -```ts filename="gateway.config.ts" -const openTelemetryConfig = { - exporters: [ - /* ... */ - ], - spans: { - /* ... 
*/ - upstreamFetch: payload => { - // Filter the spans based on the payload - return true - } + + +```ts filename="index.ts" +import './telemetry.ts' +import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' +import { useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { useGenericAuth } from '@envelop/generic-auth' + +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager(), + traces: { console: true }, +}) + +export const gateway = createGatewayRuntime({ + plugins: ctx => { + const otelPlugin = useOpenTelemetry({ + ...ctx, + traces: true + }) + + return [ + otelPlugin, + useGenericAuth({ + resolveUserFn: ({ context }) => { + const span = trace.getActiveSpan(); + const user = await fetchUser(extractUserIdFromContext(context)) + span.setAttribute('user.id', user.id); + return user + } + }), + ], } -} +}) ``` - - The `payload` object is the same as the one passed to the [`onFetch` - hook](/docs/gateway/other-features/custom-plugins#onfetch). - + -
+ + + -### Context Propagation + -By default, the plugin will -[propagate the trace context](https://opentelemetry.io/docs/concepts/context-propagation/) between -the incoming HTTP request and the outgoing HTTP requests. +If you can't or don't want to use the Context Manager, Hive Gateway provides a cross plateform +context tracking mechanism. -You may disable this by setting `inheritContext` or `propagateContext` to `false`: +To parent spans correctly, you will have to manually provide the current OTEL context. You can +retreive the current OTEL context by either using the `context.openTelemetry.activeContext` +function, or the plugin's method `getOtelContext()`. - + +If you don't have access to the graphql context, manually instanciate the OpenTelemetry plugin (see +Programatic Usage). + ```ts filename="gateway.config.ts" import { defineConfig } from '@graphql-hive/gateway' +import { openTelemetrySetup } from '@graphql-mesh/plugin-opentelemetry/setup' +import { trace } from '@opentelemetry/api' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { useGenericAuth } from '@envelop/generic-auth' + +openTelemetrySetup({ + contextManager: null, // Don't register any context manager + traces: { console: true }, +}) export const gatewayConfig = defineConfig({ openTelemetry: { - exporters: [ - /* ... */ - ], - // Controls the propagation of the trace context between the incoming HTTP request and Hive Gateway - inheritContext: false, - // Controls the propagation of the trace context between Hive Gateway and the upstream HTTP requests - propagateContext: false - } + useContextManager: false, // Make sure to disable context manager usage + traces: true, + }, + plugins: () => [ + useGenericAuth({ + resolveUserFn: ({ context }) => { + const ctx = context.openTelemetry.activeContext(); + const span = trace.getSpan(ctx) + + const user = await fetchUser(extractUserIdFromContext(context)) + span.setAttribute('user.id', user.id); + + return user + } + }), + ], }) ``` @@ -1043,23 +2114,48 @@ export const gatewayConfig = defineConfig({ -```ts filename="gateway.config.ts" +The Gateway's tracer is available in the graphql context (`context.openTelemetry.tracer`). If you +don't have access to the graphql context, you can either create your own tracer, or use `getTracer` +plugin method (as shown in this example). + +```ts filename="index.ts" +import './telemetry.ts' import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' import { useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' +import { useGenericAuth } from '@envelop/generic-auth' + +openTelemetrySetup({ + contextManager: null, // Don't register any context manager + traces: { console: true }, +}) export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ + plugins: ctx => { + const otelPlugin = useOpenTelemetry({ ...ctx, - exporters: [ - /* ... 
*/ - ], - // Controls the propagation of the trace context between the incoming HTTP request and Hive Gateway - inheritContext: false, - // Controls the propagation of the trace context between Hive Gateway and the upstream HTTP requests - propagateContext: false + useContextManager: false, // Make sure to disable context manager usage + traces: true }) - ] + + return [ + otelPlugin, + useGenericAuth({ + resolveUserFn: ({ context }) => { + // Retreive current OTEL Context by passing an object containing the graphql `context`, + // the http `request` and/or the `executionRequest`. The appropriate context will be + // found based on which of those properties are provided. + const ctx = otelPlugin.getOtelContext({ context }); + const span = trace.getSpan(ctx) + + const user = await fetchUser(extractUserIdFromContext(context)) + span.setAttribute('user.id', user.id); + + return user + } + }), + ], + } }) ``` @@ -1067,6 +2163,10 @@ export const gateway = createGatewayRuntime({ + + + + ### Troubleshooting The default behavor of the plugin is to log errors and warnings to the console. @@ -1081,12 +2181,14 @@ In addition, you can use the stdout exporter to log the traces to the console: -```ts filename="gateway.config.ts" -import { createStdoutExporter, defineConfig } from '@graphql-hive/gateway' +```ts filename="telemetry.ts" +import { openTelemetrySetup } from '@graphql-hive/plugin-opentelemetry/setup' +import { AsyncLocalStorageContextManager } from '@opentelemetry/context-async-hooks' -export const gatewayConfig = defineConfig({ - openTelemetry: { - exporters: [createStdoutExporter()] +openTelemetrySetup({ + contextManager: new AsyncLocalStorageContextManager + traces: { + console: true } }) ``` @@ -1095,24 +2197,13 @@ export const gatewayConfig = defineConfig({ - - Beware that OpenTelemetry JavaScript SDK writes spans using `console.dir`. Meaning, - serverless/on-the-edge environments that don't support `console.dir` (like [Cloudflare - Workers](https://developers.cloudflare.com/workers/runtime-apis/console/)) wont show any logs. - - -```ts filename="gateway.config.ts" -import { createGatewayRuntime } from '@graphql-hive/gateway-runtime' -import { createStdoutExporter, useOpenTelemetry } from '@graphql-mesh/plugin-opentelemetry' +```ts filename="telemetry.ts" +import { NodeSDK, tracing } from '@opentelemetry/sdk-node' -export const gateway = createGatewayRuntime({ - plugins: ctx => [ - useOpenTelemetry({ - ...ctx, - exporters: [createStdoutExporter()] - }) - ] -}) +new NodeSDK({ + // Use `spanProcessors` instead of `traceExporter` to avoid the default batching configuration + spanProcessors: [new tracing.SimpleSpanProcessor(new tracing.ConsoleSpanExporter())] +}).start() ``` diff --git a/packages/web/docs/src/content/gateway/other-features/custom-plugins.mdx b/packages/web/docs/src/content/gateway/other-features/custom-plugins.mdx index 35405d4bd6..1d97b09cee 100644 --- a/packages/web/docs/src/content/gateway/other-features/custom-plugins.mdx +++ b/packages/web/docs/src/content/gateway/other-features/custom-plugins.mdx @@ -39,9 +39,9 @@ It have to be a function, which will be called each time the gateway have update For example, if polling is enabled, this function will be called for each poll. Most Hive Gateway plugins takes an object as a parameter, and expect some common components like a -`logger`, a `pubsub`, etc... Those components are given in parameters to the `plugins` function. 
It -is advised to spread the plugin's factory context into the plugins options, this way plugins will -have access to all components they need. +`log`, a `pubsub`, etc... Those components are given in parameters to the `plugins` function. It is +advised to spread the plugin's factory context into the plugins options, this way plugins will have +access to all components they need. ```ts filename="gateway.config.ts" import { defineConfig } from '@graphql-hive/gateway' @@ -172,7 +172,7 @@ Possible usage examples of the hooks are: | `setFetchFn` | Replace the `fetch` function that will be used to make the request. It should be compatible with standard `fetch` API. | | `executionRequest` | Present only if the request is an upstream subgraph request. It contains all information about the upstream query, notably the target subgraph name. | | `requestId` | A unique ID identifying the client request. This is used to correlate downstream and upstream requests across services. | -| `logger` | The logger instance for the specific request that includes the details of the request and the response. | +| `log` | The [Hive Logger](/docs/logger) instance for the specific request that includes the details of the request and the response. | ##### `onFetchDone` @@ -659,21 +659,21 @@ This hook has a before and after stage. You can return a function to hook into t This hook is mostly used for monitoring and tracing purposes. -| Field Name | Description | -| -------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -| `supergraph` | The GraphQL schema of the supergraph. | -| `subgraph` | The name of the subgraph. | -| `sourceSubschema` | The schema of the subgraph. | -| `typeName` | The name of the type being planed. | -| `variables` | The variables provided in the client request. | -| `fragments` | The fragments provided in the client request. | -| `fieldNodes` | The field nodes of selection set being planned. | -| `context` | The GraphQL context object. | -| `requestId` | A unique ID identifying the client request. This is used to correlate downstream and upstream requests across services. | -| `logger` | The logger instance for the specific request that includes the details of the request and the response. | -| `info` | The `GraphQLResolveInfo` object of the client query. | -| `delegationPlanBuilder` | The delegation plan builder. | -| `setDelegationPlanBuilder` | Function to replace the current delegation plan builder. | +| Field Name | Description | +| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | +| `supergraph` | The GraphQL schema of the supergraph. | +| `subgraph` | The name of the subgraph. | +| `sourceSubschema` | The schema of the subgraph. | +| `typeName` | The name of the type being planed. | +| `variables` | The variables provided in the client request. | +| `fragments` | The fragments provided in the client request. | +| `fieldNodes` | The field nodes of selection set being planned. | +| `context` | The GraphQL context object. | +| `requestId` | A unique ID identifying the client request. This is used to correlate downstream and upstream requests across services. | +| `log` | The [Hive Logger](/docs/logger) instance for the specific request that includes the details of the request and the response. | +| `info` | The `GraphQLResolveInfo` object of the client query. 
| +| `delegationPlanBuilder` | The delegation plan builder. | +| `setDelegationPlanBuilder` | Function to replace the current delegation plan builder. | ##### `onDelegationPlanDone` @@ -694,21 +694,21 @@ requires multiple stages to be fully resolved. This hooks has a before and after stage. You can return a function to hook into the after stage (see [`onDelegationStageExecuteDone`](#ondelegationstageexecutedone)). -| Payload Field | Description | -| -------------- | ----------------------------------------------------------------------------------------------------------------------- | -| `object` | The object being resolved. | -| `context` | The GraphQL context object. | -| `info` | The `GraphQLResolveInfo` object. | -| `subgraph` | The name of the subgraph. | -| `subschema` | The schema of the current subgraph. | -| `selectionSet` | The selection set node that will be queried. | -| `key` | The key for the entity being resolved. | -| `type` | The type of the entity being resolved. | -| `typeName` | The name of the type being resolved. | -| `resolver` | The resolver function for the merged type. | -| `setResolver` | Function to set a new resolver for the merged type. | -| `requestId` | A unique ID identifying the client request. This is used to correlate downstream and upstream requests across services. | -| `logger` | The logger instance for the specific request that includes the details of the request and the response. | +| Payload Field | Description | +| -------------- | ---------------------------------------------------------------------------------------------------------------------------- | +| `object` | The object being resolved. | +| `context` | The GraphQL context object. | +| `info` | The `GraphQLResolveInfo` object. | +| `subgraph` | The name of the subgraph. | +| `subschema` | The schema of the current subgraph. | +| `selectionSet` | The selection set node that will be queried. | +| `key` | The key for the entity being resolved. | +| `type` | The type of the entity being resolved. | +| `typeName` | The name of the type being resolved. | +| `resolver` | The resolver function for the merged type. | +| `setResolver` | Function to set a new resolver for the merged type. | +| `requestId` | A unique ID identifying the client request. This is used to correlate downstream and upstream requests across services. | +| `log` | The [Hive Logger](/docs/logger) instance for the specific request that includes the details of the request and the response. | ##### `onDelegationStageExecuteDone` @@ -733,17 +733,17 @@ This hook is invoked for ANY request that is sent to the subgraph. You can see [Prometheus plugin](/docs/gateway/authorization-authentication) for an example of how to use this hook. -| Payload Field | Description | -| --------------------- | ------------------------------------------------------------------------------------------------------- | -| `subgraph` | The GraphQL schema of the subgraph. | -| `subgraphName` | The name of the subgraph. | -| `transportEntry` | The transport entry that will be used to resolve queries for this subgraph | -| `executionRequest` | The execution request object containing details of the upstream GraphQL operation. | -| `setExecutionRequest` | Function to replace the current execution request. | -| `executor` | The executor function used to execute the upstream request. | -| `setExecutor` | Function to replace the current executor. | -| `requestId` | A unique ID identifying the client request. 
| -| `logger` | The logger instance for the specific request that includes the details of the request and the response. | +| Payload Field | Description | +| --------------------- | ---------------------------------------------------------------------------------------------------------------------------- | +| `subgraph` | The GraphQL schema of the subgraph. | +| `subgraphName` | The name of the subgraph. | +| `transportEntry` | The transport entry that will be used to resolve queries for this subgraph | +| `executionRequest` | The execution request object containing details of the upstream GraphQL operation. | +| `setExecutionRequest` | Function to replace the current execution request. | +| `executor` | The executor function used to execute the upstream request. | +| `setExecutor` | Function to replace the current executor. | +| `requestId` | A unique ID identifying the client request. | +| `log` | The [Hive Logger](/docs/logger) instance for the specific request that includes the details of the request and the response. | ##### `onSubgraphExecuteDone` @@ -1040,7 +1040,7 @@ return a `Promise` if `wrapped()` returns a `Promise`. ### Plugin Context -Hive Gateway comes with ready-to-use `logger`, `fetch`, cache storage and etc that are shared across +Hive Gateway comes with ready-to-use `log`, `fetch`, cache storage and etc that are shared across different components. We'd highly recommend you to use those available context values instead of creating your own for a specific plugin. @@ -1050,13 +1050,13 @@ import { defineConfig } from '@graphql-hive/gateway' export const gatewayConfig = defineConfig({ plugins({ fetch, // WHATWG compatible Fetch implementation. - logger, // Logger instance used by Hive Gateway + log, // Hive Logger instance used by Hive Gateway cwd, // Current working directory pubsub, // PubSub instance used by Hive Gateway cache // Cache storage used by Hive Gateway }) { return [ - useMyPlugin({ logger, fetch }) // So the plugin can use the shared logger and fetch + useMyPlugin({ log, fetch }) // So the plugin can use the shared logger and fetch ] } }) diff --git a/packages/web/docs/src/content/gateway/other-features/performance/index.mdx b/packages/web/docs/src/content/gateway/other-features/performance/index.mdx index 5cbba58288..d02fc900ad 100644 --- a/packages/web/docs/src/content/gateway/other-features/performance/index.mdx +++ b/packages/web/docs/src/content/gateway/other-features/performance/index.mdx @@ -156,7 +156,6 @@ export default { session: () => null }, cache: new CloudflareKVCacheStorage({ - logger, namespace: env.NAMESPACE }) }) diff --git a/packages/web/docs/src/content/gateway/other-features/testing/mocking.mdx b/packages/web/docs/src/content/gateway/other-features/testing/mocking.mdx index c055029475..9ed95f8a19 100644 --- a/packages/web/docs/src/content/gateway/other-features/testing/mocking.mdx +++ b/packages/web/docs/src/content/gateway/other-features/testing/mocking.mdx @@ -9,12 +9,21 @@ import { Callout } from '@theguild/components' Mocking your GraphQL API is a common practice when developing and testing your application. It allows you to simulate the behavior of your API without making real network requests. +## Installing + +Start by installing the `@graphql-mesh/plugin-mock` package: + +```sh npm2yarn +npm i @graphql-mesh/plugin-mock +``` + ## How to use? 
Add it to your plugins: ```ts filename="gateway.config.ts" -import { defineConfig, useMock } from '@graphql-hive/gateway' +import { defineConfig } from '@graphql-hive/gateway' +import { useMock } from '@graphql-mesh/plugin-mock' export const gatewayConfig = defineConfig({ plugins: [ @@ -38,7 +47,8 @@ The example above will replace the resolver of `User.firstName` with a mock that You can also provide a custom function to generate the mock value for a field: ```ts filename="gateway.config.ts" -import { defineConfig, useMock } from '@graphql-hive/gateway' +import { defineConfig } from '@graphql-hive/gateway' +import { useMock } from '@graphql-mesh/plugin-mock' import { fullName } from './user-mocks.js' export const gatewayConfig = defineConfig({ @@ -60,7 +70,8 @@ export const gatewayConfig = defineConfig({ You can mock types with custom mock functions like below; ```ts filename="gateway.config.ts" -import { defineConfig, useMock } from '@graphql-hive/gateway' +import { defineConfig } from '@graphql-hive/gateway' +import { useMock } from '@graphql-mesh/plugin-mock' import { user } from './user-mocks.js' export const gatewayConfig = defineConfig({ @@ -116,7 +127,8 @@ type User { ``` ```ts filename="gateway.config.ts" -import { defineConfig, useMock } from '@graphql-hive/gateway' +import { defineConfig } from '@graphql-hive/gateway' +import { useMock } from '@graphql-mesh/plugin-mock' export const gatewayConfig = defineConfig({ plugins: pluginCtx => [ @@ -160,7 +172,7 @@ using the store provided in the context `context.mockStore`; When having a schema that returns a list, in this case, a list of users: ```ts filename="init-store.ts" -import { MockStore } from '@graphql-hive/gateway' +import { MockStore } from '@graphql-mesh/plugin-mock' export const store = new MockStore() const users = [{ id: 'uuid', name: 'John Snow' }] @@ -184,7 +196,8 @@ type Query { ``` ```ts filename="gateway.config.ts" -import { defineConfig, useMock } from '@graphql-hive/gateway' +import { defineConfig } from '@graphql-hive/gateway' +import { useMock } from '@graphql-mesh/plugin-mock' import { store } from './init-store.js' export const gatewayConfig = defineConfig({ @@ -219,7 +232,8 @@ type Mutation { ``` ```ts filename="gateway.config.ts" -import { defineConfig, useMock } from '@graphql-hive/gateway' +import { defineConfig } from '@graphql-hive/gateway' +import { useMock } from '@graphql-mesh/plugin-mock' import { store } from './init-store.js' export const gatewayConfig = defineConfig({ diff --git a/packages/web/docs/src/content/gateway/subscriptions.mdx b/packages/web/docs/src/content/gateway/subscriptions.mdx index 2bc1e908ff..b843ab895c 100644 --- a/packages/web/docs/src/content/gateway/subscriptions.mdx +++ b/packages/web/docs/src/content/gateway/subscriptions.mdx @@ -493,7 +493,7 @@ subgraphs: subgraph_url: http://localhost:40002 ``` -You can then run the Rover command to generate the supegraph schema SDL: +You can then run the Rover command to generate the supergraph schema SDL: ```sh rover supergraph compose --config ./supergraph.yaml > supergraph.graphql @@ -529,7 +529,7 @@ export const composeConfig = defineConfig({ You can then run the Mesh command to generate the supergraph schema DSL: ```sh -npx mesh-compose > supegraph.graphql +npx mesh-compose > supergraph.graphql ``` For more details about how to use GraphQL Mesh, please refer to the diff --git a/packages/web/docs/src/content/gateway/usage-reporting.mdx b/packages/web/docs/src/content/gateway/usage-reporting.mdx index 4646bd137d..068b0d6da0 100644 --- 
a/packages/web/docs/src/content/gateway/usage-reporting.mdx
+++ b/packages/web/docs/src/content/gateway/usage-reporting.mdx
@@ -30,8 +30,8 @@ Before proceeding, make sure you have
 hive-gateway supergraph \
   http://cdn.graphql-hive.com/artifacts/v1/12713322-4f6a-459b-9d7c-8aa3cf039c2e/supergraph \
   --hive-cdn-key "" \
-  --hive-usage-target "" \
-  --hive-usage-access-token ""
+  --hive-target "" \
+  --hive-access-token ""
 ```
@@ -45,8 +45,8 @@ docker run --rm --name hive-gateway -p 4000:4000 \
 ghcr.io/graphql-hive/gateway supergraph \
   http://cdn.graphql-hive.com/artifacts/v1/12713322-4f6a-459b-9d7c-8aa3cf039c2e/supergraph \
   --hive-cdn-key "" \
-  --hive-usage-target "" \
-  --hive-usage-access-token ""
+  --hive-target "" \
+  --hive-access-token ""
 ```
@@ -59,8 +59,8 @@ docker run --rm --name hive-gateway -p 4000:4000 \
 npx hive-gateway supergraph \
   http://cdn.graphql-hive.com/artifacts/v1/12713322-4f6a-459b-9d7c-8aa3cf039c2e/supergraph \
   --hive-cdn-key "" \
-  --hive-usage-target "" \
-  --hive-usage-access-token ""
+  --hive-target "" \
+  --hive-access-token ""
 ```
diff --git a/packages/web/docs/src/content/get-started/apollo-federation.mdx b/packages/web/docs/src/content/get-started/apollo-federation.mdx
index 9b9bfdc503..98c8968817 100644
--- a/packages/web/docs/src/content/get-started/apollo-federation.mdx
+++ b/packages/web/docs/src/content/get-started/apollo-federation.mdx
@@ -650,8 +650,8 @@ permissions.
 hive-gateway supergraph \
   "" \
   --hive-cdn-key "" \
-  --hive-usage-target "" \
-  --hive-usage-access-token ""
+  --hive-target "" \
+  --hive-access-token ""
 ```
 
 | Parameter | Description |
@@ -674,8 +674,8 @@ docker run --name hive-gateway -rm \
 ghcr.io/graphql-hive/gateway supergraph \
   "" \
   --hive-cdn-key "" \
-  --hive-usage-target "" \
-  --hive-usage-access-token ""
+  --hive-target "" \
+  --hive-access-token ""
 ```
 
 | Parameter | Description |
@@ -694,8 +694,8 @@ docker run --name hive-gateway -rm \
 npx hive-gateway supergraph \
   "" \
   --hive-cdn-key "" \
-  --hive-usage-target "" \
-  --hive-usage-access-token ""
+  --hive-target "" \
+  --hive-access-token ""
 ```
 
 | Parameter | Description |
diff --git a/packages/web/docs/src/content/logger.mdx b/packages/web/docs/src/content/logger.mdx
new file mode 100644
index 0000000000..3a91d70604
--- /dev/null
+++ b/packages/web/docs/src/content/logger.mdx
@@ -0,0 +1,870 @@
+import { Callout } from '@theguild/components'
+
+# Hive Logger
+
+A lightweight and customizable logging utility designed for use within the GraphQL Hive ecosystem.
+It provides structured logging capabilities, making it easier to debug and monitor applications
+effectively.
+
+## Compatibility
+
+The Hive Logger is designed to work seamlessly in all JavaScript environments, including Node.js,
+browsers, and serverless platforms. Its lightweight design ensures minimal overhead, making it
+suitable for a wide range of applications.
+
+## Getting Started
+
+### Install
+
+```sh npm2yarn
+npm i @graphql-hive/logger
+```
+
+### Basic Usage
+
+Create a default logger that is set to the `info` log level and writes to the console.
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger()
+
+log.debug('I wont be logged by default')
+
+log.info({ some: 'attributes' }, 'Hello %s!', 'world')
+
+const child = log.child({ requestId: '123-456' })
+
+child.warn({ more: 'attributes' }, 'Oh hello child!')
+
+const err = new Error('Woah!')
+
+child.error({ err }, 'Something went wrong!')
+```
+
+This will produce the following output in the console:
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z INF Hello world!
+  some: "attributes"
+2025-04-10T14:00:00.000Z WRN Oh hello child!
+  requestId: "123-456"
+  more: "attributes"
+2025-04-10T14:00:00.000Z ERR Something went wrong!
+  requestId: "123-456"
+  err: {
+    stack: "Error: Woah!
+      at (/project/example.js:13:13)
+      at ModuleJob.run (node:internal/modules/esm/module_job:274:25)
+      at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
+      at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:98:5)"
+    message: "Woah!"
+    name: "Error"
+    class: "Error"
+  }
+```
+{/* prettier-ignore-end */}
+
+Or, if you wish to have JSON output, set the `LOG_JSON` environment variable to a truthy value:
+
+{/* prettier-ignore-start */}
+```sh
+$ LOG_JSON=1 node example.js
+
+{"some":"attributes","level":"info","msg":"Hello world!","timestamp":"2025-04-10T14:00:00.000Z"}
+{"requestId":"123-456","more":"attributes","level":"warn","msg":"Oh hello child!","timestamp":"2025-04-10T14:00:00.000Z"}
+{"requestId":"123-456","err":{"stack":"Error: Woah!\n at (/project/example.js:13:13)\n at ModuleJob.run (node:internal/modules/esm/module_job:274:25)\n at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)\n at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:98:5)","message":"Woah!","name":"Error","class":"Error"},"level":"error","msg":"Something went wrong!","timestamp":"2025-04-10T14:00:00.000Z"}
+```
+{/* prettier-ignore-end */}
+
+## Logging Methods and Their Arguments
+
+Hive Logger provides convenient methods for each log level: `trace`, `debug`, `info`, `warn`, and
+`error`.
+
+All logging methods support flexible argument patterns for structured and formatted logging:
+
+### No Arguments
+
+Logs an empty message at the specified level.
+
+```ts
+log.debug()
+```
+
+```sh
+2025-04-10T14:00:00.000Z DBG
+```
+
+### Attributes Only
+
+Logs structured attributes without a message.
+
+```ts
+log.info({ hello: 'world' })
+```
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z INF
+  hello: "world"
+```
+{/* prettier-ignore-end */}
+
+### Message with Interpolation
+
+Logs a formatted message, similar to printf-style formatting. Read more about it in the
+[Message Formatting section](#message-formatting).
+
+```ts
+log.warn('Hello %s!', 'World')
+```
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z WRN Hello World!
+```
+{/* prettier-ignore-end */}
+
+### Attributes and Message (with interpolation)
+
+Logs structured attributes and a formatted message. The attributes can be anything object-like,
+including classes.
+
+```ts
+const err = new Error('Something went wrong!')
+log.error(err, 'Problem occurred at %s', new Date())
+```
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z ERR Problem occurred at Thu Apr 10 2025 14:00:00 GMT+0200 (Central European Summer Time)
+  stack: "Error: Something went wrong!
+    at (/projects/example.js:2:1)"
+  message: "Something went wrong!"
+  name: "Error"
+  class: "Error"
+```
+{/* prettier-ignore-end */}
+
+## Message Formatting
+
+The Hive Logger uses the
+[`quick-format-unescaped` library](https://github.com/pinojs/quick-format-unescaped) to format log
+messages that include interpolation (e.g., placeholders like %s, %d, etc.).
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger()
+
+log.info('hello %s %j %d %o', 'world', { obj: true }, 4, { another: 'obj' })
+```
+
+Outputs:
+
+```sh
+2025-04-10T14:00:00.000Z INF hello world {"obj":true} 4 {"another":"obj"}
+```
+
+The available interpolation placeholders are:
+
+- `%s` - string
+- `%d` and `%f` - number (with or without decimals)
+- `%i` - integer number
+- `%o`, `%O` and `%j` - JSON stringified object
+- `%%` - escaped percentage sign
+
+## Log Levels
+
+The default logger uses the `info` log level, which logs only `info` and more severe levels. The
+available log levels are:
+
+- `false` (disables logging altogether)
+- `trace`
+- `debug`
+- `info` _(default)_
+- `warn`
+- `error`
+
+### Lazy Arguments and Performance
+
+Hive Logger supports "lazy" attributes for log methods. If you pass a function as the attributes
+argument, it will only be evaluated if the log level is enabled and the log will actually be
+written. This avoids unnecessary computation for expensive attributes when the log would be ignored
+due to the current log level.
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger({ level: 'info' })
+
+log.debug(
+  // This function will NOT be called, since 'debug' is below the current log level.
+  () => ({ expensive: computeExpensiveValue() }),
+  'This will not be logged'
+)
+
+log.info(
+  // This function WILL be called, since the 'info' log level is set.
+  () => ({ expensive: computeExpensiveValue() }),
+  'This will be logged'
+)
+```
+
+### Change on Creation
+
+When creating an instance of the logger, you can set the logging level with the `level` option,
+like this:
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger({ level: 'debug' })
+
+log.trace(
+  // you can supply "lazy" attributes which won't be evaluated unless the log level allows logging
+  () => ({
+    wont: 'be evaluated',
+    some: expensiveOperation()
+  }),
+  'Wont be logged and attributes wont be evaluated'
+)
+
+log.debug('Hello world!')
+
+const child = log.child('[prefix] ')
+
+child.debug('Child loggers inherit the parent log level')
+```
+
+Outputs the following to the console:
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z DBG Hello world!
+2025-04-10T14:00:00.000Z DBG [prefix] Child loggers inherit the parent log level
+```
+{/* prettier-ignore-end */}
+
+### Change Dynamically
+
+Alternatively, you can change the logging level dynamically during runtime. There are two possible
+ways of doing that.
+
+#### Using `log.setLevel(level: LogLevel)`
+
+One way is to use the logger's `setLevel` method.
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger({ level: 'debug' })
+
+log.debug('Hello world!')
+
+const child = log.child('[prefix] ')
+
+child.debug('Child loggers inherit the parent log level')
+
+log.setLevel('trace')
+
+log.trace(() => ({ hi: 'there' }), 'Now tracing is logged too!')
+
+child.trace('Also on the child logger')
+
+child.setLevel('info')
+
+log.trace('Still logging!')
+
+child.debug('Wont be logged because the child has a different log level now')
+
+child.info('Hello child!')
+```
+
+Outputs the following to the console:
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z DBG Hello world!
+2025-04-10T14:00:00.000Z DBG [prefix] Child loggers inherit the parent log level
+2025-04-10T14:00:00.000Z TRC Now tracing is logged too!
+  hi: "there"
+2025-04-10T14:00:00.000Z TRC [prefix] Also on the child logger
+2025-04-10T14:00:00.000Z TRC Still logging!
+2025-04-10T14:00:00.000Z INF [prefix] Hello child!
+```
+{/* prettier-ignore-end */}
+
+#### Using `LoggerOptions.level` Function
+
+Another way is to pass a function to the `level` option when creating a logger.
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+let isDebug = false
+
+const log = new Logger({
+  level: () => {
+    if (isDebug) {
+      return 'debug'
+    }
+    return 'info'
+  }
+})
+
+log.debug('isDebug is false, so this wont be logged')
+
+log.info('Hello world!')
+
+const child = log.child('[scoped] ')
+
+child.debug('Child loggers inherit the parent log level function, so this wont be logged either')
+
+// enable debug mode
+isDebug = true
+
+child.debug('Now debug is enabled and logged')
+```
+
+Outputs the following:
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z INF Hello world!
+2025-04-10T14:00:00.000Z DBG [scoped] Now debug is enabled and logged
+```
+{/* prettier-ignore-end */}
+
+## Child Loggers
+
+Child loggers in Hive Logger allow you to create new logger instances that inherit configuration
+(such as log level, writers, and attributes) from their parent logger. This is useful for
+associating contextual information (like request IDs or component names) with all logs from a
+specific part of your application.
+
+When you create a child logger using the `child` method, you can:
+
+- Add a prefix to all log messages from the child logger.
+- Add attributes that will be included in every log entry from the child logger.
+- Inherit the log level and writers from the parent logger, unless explicitly changed on the child.
+
+This makes it easy to organize and structure logs in complex applications, ensuring that related
+logs carry consistent context.
+
+<Callout>
+  In a child logger, attributes provided in individual log calls will overwrite any attributes
+  inherited from the parent logger if they share the same keys. This allows you to override or add
+  context-specific attributes for each log entry.
+</Callout>
+
+For example, running this:
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger()
+
+const child = log.child({ requestId: '123-456' }, '[child] ')
+
+child.info('Hello World!')
+child.info({ requestId: 'overwritten attribute' })
+
+const nestedChild = child.child({ traceId: '789-012' }, '[nestedChild] ')
+
+nestedChild.info('Hello Deep Down!')
+```
+
+Will output:
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z INF [child] Hello World!
+  requestId: "123-456"
+2025-04-10T14:00:00.000Z INF [child]
+  requestId: "overwritten attribute"
+2025-04-10T14:00:00.000Z INF [child] [nestedChild] Hello Deep Down!
+  requestId: "123-456"
+  traceId: "789-012"
+```
+{/* prettier-ignore-end */}
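+
+A common application of child loggers is attaching request-scoped context. The following is a
+minimal sketch under the assumption of a plain Node.js HTTP server (Hive Gateway already provides
+such a request-scoped `log` for you); only the `child` and interpolation APIs shown above are used:
+
+```ts
+import { randomUUID } from 'node:crypto'
+import { createServer } from 'node:http'
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger()
+
+createServer((req, res) => {
+  // a child logger carrying the request ID; every log written through it
+  // will automatically include the `requestId` attribute
+  const reqLog = log.child({ requestId: randomUUID() })
+
+  reqLog.info('handling %s %s', req.method, req.url)
+  res.end('ok')
+}).listen(4000)
+```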
+
+## Writers
+
+Logger writers are responsible for handling how and where log messages are output. In Hive Logger,
+writers are pluggable components that receive structured log data and determine its final
+destination and format. This allows you to easily customize logging behavior, such as printing logs
+to the console, writing them as JSON, storing them in memory for testing, or sending them to
+external systems.
+
+By default, Hive Logger provides several built-in writers, but you can also implement your own to
+suit your application's needs. The built-ins are:
+
+### `MemoryLogWriter`
+
+Writes the logs to memory, allowing you to access them later. Mostly useful for testing.
+
+```ts
+import { Logger, MemoryLogWriter } from '@graphql-hive/logger'
+
+const writer = new MemoryLogWriter()
+
+const log = new Logger({ writers: [writer] })
+
+log.info({ my: 'attrs' }, 'Hello World!')
+
+console.log(writer.logs)
+```
+
+Outputs:
+
+```sh
+[ { level: 'info', msg: 'Hello World!', attrs: { my: 'attrs' } } ]
+```
+
+### `ConsoleLogWriter` (default)
+
+The default log writer used by the Hive Logger. It outputs log messages to the console in a
+human-friendly, colorized format, making it easy to distinguish log levels and read structured
+attributes. Each log entry includes a timestamp, the log level (with color), the message, and any
+additional attributes (with colored keys), which are pretty-printed and formatted for clarity.
+
+The writer works in both Node.js and browser-like environments, automatically disabling colors if
+not supported. This makes `ConsoleLogWriter` a solid default for most cases, providing clear and
+readable logs out of the box.
+
+```ts
+import { ConsoleLogWriter, Logger } from '@graphql-hive/logger'
+
+const writer = new ConsoleLogWriter({
+  noColor: true, // defaults to env.NO_COLOR. read more: https://no-color.org/
+  noTimestamp: true
+})
+
+const log = new Logger({ writers: [writer] })
+
+log.info({ my: 'attrs' }, 'Hello World!')
+```
+
+Outputs:
+
+{/* prettier-ignore-start */}
+```sh
+INF Hello World!
+  my: "attrs"
+```
+{/* prettier-ignore-end */}
+
+#### Disabling Colors
+
+You can disable colors in the console output by setting the `NO_COLOR=1` environment variable. Any
+environment that needs the logger to produce uncolored output can set this variable, following the
+[NO_COLOR convention](https://no-color.org/).
+
+### `JSONLogWriter`
+
+Used when the `LOG_JSON=1` environment variable is provided.
+
+This built-in log writer outputs each log entry as a structured JSON object. When used, it prints
+logs to the console in JSON format, including all provided attributes, the log level, the message,
+and a timestamp.
+
+In the `JSONLogWriter` implementation, any attributes you provide with the keys `msg`, `timestamp`,
+or `level` will be overwritten in the final log output. This is because the writer explicitly sets
+these fields when constructing the log object. If you include these keys in your attributes, their
+values will be replaced by the logger's own values in the JSON output.
+
+If the `LOG_JSON_PRETTY=1` environment variable is provided, the output will be pretty-printed for
+readability; otherwise, it is compact.
+
+This writer's format is ideal for machine parsing, log aggregation, or integration with external
+logging systems. It is especially useful for production environments or when logs need to be
+consumed by other tools.
+
+```ts
+import { JSONLogWriter, Logger } from '@graphql-hive/logger'
+
+const log = new Logger({ writers: [new JSONLogWriter()] })
+
+log.info({ my: 'attrs' }, 'Hello World!')
+```
+
+Outputs:
+
+{/* prettier-ignore-start */}
+```sh
+{"my":"attrs","level":"info","msg":"Hello World!","timestamp":"2025-04-10T14:00:00.000Z"}
+```
+{/* prettier-ignore-end */}
+
+Or pretty printed:
+
+{/* prettier-ignore-start */}
+```sh
+$ LOG_JSON_PRETTY=1 node example.js
+
+{
+  "my": "attrs",
+  "level": "info",
+  "msg": "Hello World!",
+  "timestamp": "2025-04-10T14:00:00.000Z"
+}
+```
+{/* prettier-ignore-end */}
+
+### Optional Writers
+
+Hive Logger includes writers for common loggers of the JavaScript ecosystem, shipped as optional
+peer dependencies.
+
+#### `LogTapeLogWriter`
+
+Use the [`LogTape` logger library](https://logtape.org/) for writing Hive Logger's logs.
+
+`@logtape/logtape` is an optional peer dependency, so you must install it first.
+
+```sh npm2yarn
+npm i @logtape/logtape
+```
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+import { LogTapeLogWriter } from '@graphql-hive/logger/writers/logtape'
+import { configure, getConsoleSink } from '@logtape/logtape'
+
+await configure({
+  sinks: { console: getConsoleSink() },
+  loggers: [{ category: 'hive-gateway', sinks: ['console'] }]
+})
+
+const log = new Logger({ writers: [new LogTapeLogWriter()] })
+
+log.info({ some: 'attributes' }, 'hello world')
+```
+
+```sh
+14:00:00.000 INF hive-gateway hello world
+```
+
+#### `PinoLogWriter` (Node.js Only)
+
+Use the [Node.js `pino` logger library](https://github.com/pinojs/pino) for writing Hive Logger's
+logs.
+
+`pino` is an optional peer dependency, so you must install it first.
+
+```sh npm2yarn
+npm i pino pino-pretty
+```
+
+```ts
+import pino from 'pino'
+import { Logger } from '@graphql-hive/logger'
+import { PinoLogWriter } from '@graphql-hive/logger/writers/pino'
+
+const pinoLogger = pino({
+  transport: {
+    target: 'pino-pretty'
+  }
+})
+
+const log = new Logger({ writers: [new PinoLogWriter(pinoLogger)] })
+
+log.info({ some: 'attributes' }, 'hello world')
+```
+
+{/* prettier-ignore-start */}
+```sh
+[14:00:00.000] INFO (20744): hello world
+  some: "attributes"
+```
+{/* prettier-ignore-end */}
+
+#### `WinstonLogWriter` (Node.js Only)
+
+Use the [Node.js `winston` logger library](https://github.com/winstonjs/winston) for writing Hive
+Logger's logs.
+
+`winston` is an optional peer dependency, so you must install it first.
+
+```sh npm2yarn
+npm i winston
+```
+
+```ts
+import winston from 'winston'
+import { Logger } from '@graphql-hive/logger'
+import { WinstonLogWriter } from '@graphql-hive/logger/writers/winston'
+
+const winstonLogger = winston.createLogger({
+  transports: [new winston.transports.Console()]
+})
+
+const log = new Logger({ writers: [new WinstonLogWriter(winstonLogger)] })
+
+log.info({ some: 'attributes' }, 'hello world')
+```
+
+```sh
+{"level":"info","message":"hello world","some":"attributes"}
+```
+
+<Callout>
+  The Winston logger does not have a "trace" log level. Hive Logger will instead use "verbose" when
+  writing logs to Winston.
+</Callout>
+
+### Custom Writers
+
+You can implement custom log writers for the Hive Logger by creating a class that implements the
+`LogWriter` interface.
+This interface requires a single `write` method, which receives the log level, attributes, and
+message, plus an optional `flush` method that lets you ensure all writer jobs are completed when
+the logger is flushed.
+
+Your writer can perform any action, such as sending logs to a file, external service, or custom
+destination.
+
+Writers can be synchronous (returning `void`) or asynchronous (returning a `Promise`). If your
+writer performs asynchronous operations (like network requests or file writes), simply return a
+promise from the `write` method.
+
+Furthermore, you can optionally implement the `flush` method to ensure that all pending writes are
+completed before the logger is disposed or flushed. This is particularly useful for asynchronous
+writers that need to ensure all logs are written before the application exits or the logger is no
+longer needed.
+
+```ts
+import { Attributes, LogLevel } from '@graphql-hive/logger'
+
+interface LogWriter {
+  write(
+    level: LogLevel,
+    attrs: Attributes | null | undefined,
+    msg: string | null | undefined
+  ): void | Promise<void>
+  flush?(): void | Promise<void>
+}
+```
+
+#### Example of an HTTP Writer
+
+```ts
+import { Attributes, ConsoleLogWriter, Logger, LogLevel, LogWriter } from '@graphql-hive/logger'
+
+class HTTPLogWriter implements LogWriter {
+  async write(level: LogLevel, attrs: Attributes, msg: string) {
+    await fetch('https://my-log-service.com', {
+      method: 'POST',
+      headers: { 'content-type': 'application/json' },
+      body: JSON.stringify({ level, attrs, msg })
+    })
+  }
+}
+
+const log = new Logger({
+  // send logs both to the HTTP logging service and output them to the console
+  writers: [new HTTPLogWriter(), new ConsoleLogWriter()]
+})
+
+log.info('Hello World!')
+
+await log.flush() // make sure all async writes settle
+```
+
+#### Example of a Daily File Log Writer (Node.js Only)
+
+Here is an example of a custom log writer that writes logs to a daily log file. It will write to a
+file for each day in a given directory.
+
+```ts filename="daily-file-log-writer.ts"
+import fs from 'node:fs/promises'
+import path from 'node:path'
+import { Attributes, jsonStringify, LogLevel, LogWriter } from '@graphql-hive/logger'
+
+export class DailyFileLogWriter implements LogWriter {
+  constructor(
+    private dir: string,
+    private name: string
+  ) {}
+  write(level: LogLevel, attrs: Attributes | null | undefined, msg: string | null | undefined) {
+    const date = new Date().toISOString().split('T')[0]
+    const logfile = path.resolve(this.dir, `${this.name}_${date}.log`)
+    return fs.appendFile(logfile, jsonStringify({ level, msg, attrs }))
+  }
+}
+```
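+
+Hooking the writer up is then just a matter of passing it to the logger. A brief usage sketch
+(the directory, file-name prefix, and compiled `.js` import path are arbitrary assumptions):
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+import { DailyFileLogWriter } from './daily-file-log-writer.js'
+
+// writes entries to ./logs/app_YYYY-MM-DD.log, rolling over daily
+const log = new Logger({ writers: [new DailyFileLogWriter('./logs', 'app')] })
+
+log.info('Hello World!')
+
+await log.flush() // make sure the pending file appends settle
+```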
+
+#### Flushing and Non-Blocking Logging
+
+The logger does not block when you log asynchronously. Instead, it tracks all pending async writes
+internally. When you call `log.flush()` it waits for all pending writes to finish, ensuring no logs
+are lost on shutdown. During normal operation, logging remains fast and non-blocking, even if some
+writers are async.
+
+This design allows you to use async writers without impacting the performance of your application
+or blocking the main thread.
+
+After all writes have been completed, the logger will call the optional `flush` method on the
+writers, executing any custom finalization logic you may have implemented.
+
+##### Explicit Resource Management
+
+The Hive Logger also supports
+[Explicit Resource Management](https://github.com/tc39/proposal-explicit-resource-management). This
+allows you to ensure that all pending asynchronous log writes are properly flushed before your
+application exits or when the logger is no longer needed.
+
+You can use the logger with `await using` (in environments that support it) to wait for all log
+operations to complete. This is especially useful in serverless or short-lived environments where
+you want to guarantee that no logs are lost due to unfinished asynchronous operations.
+
+```ts
+import { Attributes, ConsoleLogWriter, Logger, LogLevel, LogWriter } from '@graphql-hive/logger'
+
+class HTTPLogWriter implements LogWriter {
+  async write(level: LogLevel, attrs: Attributes, msg: string) {
+    await fetch('https://my-log-service.com', {
+      method: 'POST',
+      headers: { 'content-type': 'application/json' },
+      body: JSON.stringify({ level, attrs, msg })
+    })
+  }
+}
+
+{
+  await using log = new Logger({
+    // send logs both to the HTTP logging service and output them to the console
+    writers: [new HTTPLogWriter(), new ConsoleLogWriter()]
+  })
+
+  log.info('Hello World!')
+}
+
+// logger went out of scope and all of the logs have been flushed
+```
+
+##### Handling Async Write Errors
+
+The Logger handles write errors for asynchronous writers by tracking all write promises. When
+`await log.flush()` is called (including during async disposal), it waits for all pending writes to
+settle. If any writes fail (i.e., their promises reject), their errors are collected; once all
+writes have settled, an `AggregateError` containing the individual write errors is thrown.
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+let i = 0
+const log = new Logger({
+  writers: [
+    {
+      async write() {
+        i++
+        throw new Error('Write failed! #' + i)
+      }
+    }
+  ]
+})
+
+// no failures at the time of logging
+log.info('hello')
+log.info('world')
+
+try {
+  await log.flush()
+} catch (e) {
+  // flush will fail with an AggregateError of the individual write failures
+  console.error(e)
+}
+```
+
+Outputs:
+
+```sh
+AggregateError: Failed to flush 2 writes
+    at async (/project/example.js:20:3) {
+  [errors]: [
+    Error: Write failed! #1
+        at Object.write (/project/example.js:9:15),
+    Error: Write failed! #2
+        at Object.write (/project/example.js:9:15)
+  ]
+}
+```
+
+## Advanced Serialization of Attributes
+
+Hive Logger uses advanced serialization to ensure that all attributes are logged safely and
+readably, even when they contain complex or circular data structures. This means you can log rich,
+nested objects or errors as attributes without worrying about serialization failures or unreadable
+logs.
+
+For example, the logger will serialize an error object, including its message and stack, in a safe
+and readable way. This advanced serialization is applied automatically to all attributes passed to
+log methods, child loggers, and writers.
+
+```ts
+import { Logger } from '@graphql-hive/logger'
+
+const log = new Logger()
+
+class DatabaseError extends Error {
+  constructor(message: string) {
+    super(message)
+    this.name = 'DatabaseError'
+  }
+}
+const dbErr = new DatabaseError('Connection failed')
+const userErr = new Error('Updating user failed', { cause: dbErr })
+const errs = new AggregateError([dbErr, userErr], 'Failed to update user')
+
+log.error(errs)
+```
+
+{/* prettier-ignore-start */}
+```sh
+2025-04-10T14:00:00.000Z ERR
+  stack: "AggregateError: Failed to update user
+    at (/project/example.js:13:14)
+    at ModuleJob.run (node:internal/modules/esm/module_job:274:25)
+    at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
+    at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:98:5)"
+  message: "Failed to update user"
+  errors: [
+    {
+      stack: "DatabaseError: Connection failed
+        at (/project/example.js:11:15)
+        at ModuleJob.run (node:internal/modules/esm/module_job:274:25)
+        at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
+        at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:98:5)"
+      message: "Connection failed"
+      name: "DatabaseError"
+      class: "DatabaseError"
+    }
+    {
+      stack: "Error: Updating user failed
+        at (/project/example.js:12:17)
+        at ModuleJob.run (node:internal/modules/esm/module_job:274:25)
+        at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
+        at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:98:5)"
+      message: "Updating user failed"
+      cause: {
+        stack: "DatabaseError: Connection failed
+          at (/project/example.js:11:15)
+          at ModuleJob.run (node:internal/modules/esm/module_job:274:25)
+          at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
+          at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:98:5)"
+        message: "Connection failed"
+        name: "DatabaseError"
+        class: "DatabaseError"
+      }
+      name: "Error"
+      class: "Error"
+    }
+  ]
+  name: "AggregateError"
+  class: "AggregateError"
+```
+{/* prettier-ignore-end */}
diff --git a/packages/web/docs/src/content/migration-guides/_meta.ts b/packages/web/docs/src/content/migration-guides/_meta.ts
index 92dfe0d557..627ca78a0b 100644
--- a/packages/web/docs/src/content/migration-guides/_meta.ts
+++ b/packages/web/docs/src/content/migration-guides/_meta.ts
@@ -1,3 +1,4 @@
 export default {
   'organization-access-tokens': 'Registry Access Tokens to Access Tokens',
+  'gateway-v1-v2': 'Gateway from v1 to v2',
 };
diff --git a/packages/web/docs/src/content/migration-guides/gateway-v1-v2.mdx b/packages/web/docs/src/content/migration-guides/gateway-v1-v2.mdx
new file mode 100644
index 0000000000..e65fe4f241
--- /dev/null
+++ b/packages/web/docs/src/content/migration-guides/gateway-v1-v2.mdx
@@ -0,0 +1,694 @@
+import { Callout, Tabs } from '@theguild/components'
+
+# Migrating Hive Gateway from v1 to v2
+
+This document guides you through the process of migrating your Hive Gateway from version 1 to
+version 2. It outlines the key changes, points out potential breaking changes, and provides
+step-by-step instructions to ensure a smooth transition.
+
+v2 includes several breaking changes and improvements over v1.
+The most significant changes are:
+
+- [Drop Support for Node v18](#drop-support-for-node-v18)
+- [Multipart Requests are Disabled by Default](#multipart-requests-are-disabled-by-default)
+- [Remove Mocking Plugin from built-ins](#remove-mocking-plugin-from-built-ins)
+- [Disabled Automatic Forking](#disabled-automatic-forking)
+- [Hive Logger](#hive-logger)
+- [OpenTelemetry](#opentelemetry)
+- [Subgraph Name in Execution Request](#subgraph-name-in-execution-request)
+- [Renamed CLI options for Hive Usage Reporting](#renamed-cli-options-for-hive-usage-reporting)
+
+## Drop Support for Node v18
+
+Node v18 has reached its end of life (as of 30 Apr 2025), and Hive Gateway v2, along with its
+dependencies, no longer supports it. The minimum Node version required to run Hive Gateway is now
+Node v20.
+
+The following packages have been updated to their latest versions, which require Node v20 or higher:
+
+- `@graphql-hive/gateway`
+- `@graphql-hive/gateway-runtime`
+- `@graphql-mesh/fusion-runtime`
+- `@graphql-mesh/hmac-upstream-signature`
+- `@graphql-hive/plugin-deduplicate-request`
+- `@graphql-mesh/transport-http-callback`
+- `@graphql-mesh/plugin-opentelemetry`
+- `@graphql-tools/executor-graphql-ws`
+- `@graphql-tools/stitching-directives`
+- `@graphql-mesh/plugin-prometheus`
+- `@graphql-hive/plugin-aws-sigv4`
+- `@graphql-mesh/transport-common`
+- `@graphql-tools/executor-common`
+- `@graphql-mesh/plugin-jwt-auth`
+- `@graphql-mesh/transport-http`
+- `@graphql-tools/batch-delegate`
+- `@graphql-tools/executor-http`
+- `@graphql-tools/batch-execute`
+- `@graphql-mesh/transport-ws`
+- `@graphql-tools/federation`
+- `@graphql-tools/delegate`
+- `@graphql-hive/nestjs`
+- `@graphql-hive/pubsub`
+- `@graphql-tools/stitch`
+- `@graphql-tools/wrap`
+
+## Multipart Requests are Disabled by Default
+
+The only objective of the
+[GraphQL multipart request spec](https://github.com/jaydenseric/graphql-multipart-request-spec) is
+to support file uploads; however, file uploads are not native to GraphQL and are generally
+considered an anti-pattern.
+
+To enable file uploads, you need to explicitly enable multipart support by setting the
+`multipart` Hive Gateway option to `true`.
+
+```diff filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway';
+
+export const gatewayConfig = defineConfig({
++ multipart: true,
+});
+```
+
+## Remove Mocking Plugin from built-ins
+
+The `useMock` plugin no longer needs to ship among Hive Gateway's built-ins: not only is the mock
+plugin 2MB in size (minified), but installing and using it separately is very simple.
+
+To migrate, start by installing the `@graphql-mesh/plugin-mock` package:
+
+```sh npm2yarn
+npm i @graphql-mesh/plugin-mock
+```
+
+and then use it in your Hive Gateway configuration:
+
+```diff filename="gateway.config.ts"
+import {
+  defineConfig,
+- useMock
+} from '@graphql-hive/gateway';
++ import { useMock } from '@graphql-mesh/plugin-mock'
+
+export const gatewayConfig = defineConfig({
+  plugins: [
+    useMock({
+      mocks: [
+        {
+          apply: 'User.firstName',
+          faker: '{{name.firstName}}'
+        }
+      ]
+    })
+  ]
+})
+```
+
+<Callout>
+  You can read more about the mocking plugin in the [Mocking
+  documentation](/docs/gateway/other-features/testing/mocking).
+</Callout>
+
+## Disabled Automatic Forking
+
+We were previously forking workers automatically in v1 when detecting `NODE_ENV=production`;
+however, forking workers for concurrent processing is a delicate process and, if not done
+carefully, can lead to performance degradation.
+It should be configured with careful consideration by advanced users.
+
+In v2, the automatic forking of workers has been disabled by default. This means that Hive Gateway
+will no longer automatically create child processes to handle concurrent requests. Instead, you can
+manually configure forking if needed.
+
+You can configure forking in your `gateway.config.ts` file by using the `fork` option or the
+environment variable `FORK`. These options allow you to specify the number of worker processes to
+fork.
+
+## Hive Logger
+
+The Hive Logger is a new feature in v2 that provides enhanced logging capabilities. It allows you to
+log messages at different levels (info, debug, error) and provides a more structured way to handle
+logs. The logger implementation now consistently uses the new `@graphql-hive/logger` package and
+standardizes the logger prop naming and usage.
+
+<Callout>
+  You can read more about the new logger and its features in the [Hive Logger
+  documentation](/docs/logger) and how it works with Hive Gateway in the [Logging and Error Handling
+  documentation](/docs/gateway/logging-and-error-handling).
+</Callout>
+
+### Deprecating the Old Logger
+
+The old logger interface from `@graphql-mesh/types` or `@graphql-mesh/utils`, the `DefaultLogger`
+and the `LogLevel` enum have been deprecated and will be removed in the future, after all components
+are migrated to the new logger.
+
+```diff
+- import { DefaultLogger, LogLevel } from '@graphql-mesh/utils';
+- const logger = new DefaultLogger(undefined, LogLevel.debug);
++ import { Logger } from '@graphql-hive/logger';
++ const log = new Logger({ level: 'debug' });
+```
+
+Logging uses similar methods as before, with two significant changes:
+
+1. The first, optional, argument of the logging methods is now the metadata
+1. The message supports interpolation of all values that follow it
+
+```diff
+- logger.debug(`Hello ${'world'}`, { foo: 'bar' });
++ log.debug({ foo: 'bar' }, 'Hello %s', 'world');
+```
+
+### `logging` Configuration Option
+
+The `logging` option has been changed to accept either:
+
+1. `true` to enable and log using the `info` level
+1. `false` to disable logging altogether
+1. A Hive Logger instance
+1. A string log level (e.g., `debug`, `info`, `warn`, `error`)
+
+#### Changing the Log Level
+
+```diff filename="gateway.config.ts"
+import {
+  defineConfig,
+- LogLevel,
+} from '@graphql-hive/gateway';
+
+export const gatewayConfig = defineConfig({
+- logging: LogLevel.debug,
++ logging: 'debug',
+});
+```
+
+```diff filename="index.ts"
+import {
+  createGatewayRuntime,
+- LogLevel,
+} from '@graphql-hive/gateway-runtime';
+
+export const gateway = createGatewayRuntime({
+- logging: LogLevel.debug,
++ logging: 'debug',
+});
+```
+
+##### Dynamically Changing the Log Level
+
+A great new feature of the Hive Logger is the ability to change the log level dynamically at
+runtime. This allows you to adjust the verbosity of logs without restarting the application.
+
+Please refer to the
+[Hive Logger documentation](/docs/gateway/logging-and-error-handling#change-dynamically) for more
+details and an example.
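+
+As a minimal sketch, assuming you pass a Hive Logger instance to the `logging` option (as described
+above), you can keep a reference to it and raise the level at runtime through the documented
+`setLevel` method. The `SIGUSR2` trigger here is just an illustrative choice; any signal, timer, or
+admin endpoint works:
+
+```ts filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway'
+import { Logger } from '@graphql-hive/logger'
+
+// start at 'info' and keep a reference so the level can be changed later
+const log = new Logger({ level: 'info' })
+
+process.on('SIGUSR2', () => {
+  // switch to debug logging without restarting the gateway
+  log.setLevel('debug')
+})
+
+export const gatewayConfig = defineConfig({
+  logging: log
+})
+```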
+
+#### Using a Custom Logger
+
+```diff filename="gateway.config.ts"
+import {
+  defineConfig,
+- DefaultLogger,
+- LogLevel,
++ Logger,
+} from '@graphql-hive/gateway';
+
+export const gatewayConfig = defineConfig({
+- logging: new DefaultLogger(undefined, LogLevel.debug),
++ logging: new Logger({ level: 'debug' }),
+});
+```
+
+```diff filename="index.ts"
+import {
+  createGatewayRuntime,
+- DefaultLogger,
+- LogLevel,
++ Logger,
+} from '@graphql-hive/gateway-runtime';
+
+export const gateway = createGatewayRuntime({
+- logging: new DefaultLogger(undefined, LogLevel.debug),
++ logging: new Logger({ level: 'debug' }),
+});
+```
+
+### The Environment Variable
+
+Hive Logger continues to support the `DEBUG=1` environment variable for enabling debug logging.
+
+Additionally, it supports the new `LOG_LEVEL` environment variable for setting a specific log
+level. This allows you to control the log level without modifying the code or configuration files.
+
+For example, setting `LOG_LEVEL=debug` will enable debug logging, while `LOG_LEVEL=warn` will set
+the log level to "warn".
+
+#### Logging in JSON Format
+
+Previously, the Hive Gateway used two different environment variables to trigger logging in JSON
+format:
+
+- `LOG_FORMAT=json`
+- `NODE_ENV=production`
+
+Both of those variables are now removed and replaced with `LOG_JSON=1`.
+
+#### Pretty Logging
+
+In addition to the JSON format, Hive Gateway had an additional `LOG_FORMAT=pretty` environment
+variable that pretty-printed the logs. This variable has been removed.
+
+When using the default logger, the logs are now pretty-printed by default. This means that the logs
+will be formatted in a human-readable way, making them easier to read and understand.
+
+Additionally, if you're using [the JSON format](#logging-in-json-format), you can use the
+`LOG_JSON_PRETTY=1` environment variable to enable pretty-printing the JSON logs.
+
+### Prop Renaming `logger` to `log`
+
+Throughout the codebase, the `logger` prop has been renamed to `log`. This change is part of the
+standardization effort to ensure consistency across all components and plugins. The new `log` prop
+is now used in all APIs, contexts, and plugin options. It's shorter and more intuitive, making it
+easier to understand and use.
+
+#### Context
+
+The context object passed to plugins and hooks now uses `log` instead of `logger`. In essence, the
+`GatewayConfigContext` interface has been changed to:
+
+```diff filename="@graphql-hive/gateway"
+- import type { Logger as LegacyLogger } from '@graphql-mesh/types';
++ import type { Logger as HiveLogger } from '@graphql-hive/logger';
+
+export interface GatewayConfigContext {
+- logger: LegacyLogger;
++ log: HiveLogger;
+  // ...rest of the properties
+}
+```
+
+The same goes for all of the transports' contexts. Each of the transport contexts now has a `log`
+prop instead of `logger`. Additionally, the logger is required and will always be provided.
+
+```diff filename="@graphql-mesh/transport-common"
+- import type { Logger as LegacyLogger } from '@graphql-mesh/types';
++ import type { Logger as HiveLogger } from '@graphql-hive/logger';
+
+export interface TransportContext {
+- logger?: LegacyLogger;
++ log: HiveLogger;
+  // ...rest of the properties
+}
+```
+
+##### Plugin Setup
+
+```diff filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway';
+import { myPlugins } from './my-plugins';
+
+export const gatewayConfig = defineConfig({
+  plugins(ctx) {
+-   ctx.logger.info('Loading plugins...');
++   ctx.log.info('Loading plugins...');
+    return [...myPlugins];
+  },
+});
+```
+
+##### Plugin Hooks
+
+Across all plugins, hooks and contexts, the `logger` prop has been renamed to `log` and will always
+be provided.
+
+It is now highly recommended to always use the logger from the context, because it contains the
+necessary metadata for increased observability, like the request ID or the execution step.
+
+```diff filename="gateway.config.ts"
+import { defineConfig } from '@graphql-hive/gateway';
+
+export const gatewayConfig = defineConfig({
+- plugins({ log }) {
++ plugins() {
+    return [
+      {
+        onExecute({ context }) {
+-         log.info('Executing...');
++         context.log.info('Executing...');
+        },
+        onDelegationPlan(context) {
+-         log.info('Creating delegation plan...');
++         context.log.info('Creating delegation plan...');
+        },
+        onSubgraphExecute(context) {
+-         log.info('Executing on subgraph...');
++         context.log.info('Executing on subgraph...');
+        },
+        onFetch({ context }) {
+-         log.info('Fetching data...');
++         context.log.info('Fetching data...');
+        },
+      },
+    ];
+  },
+});
+```
+
+This will log with the necessary metadata for increased observability, like this:
+
+```
+2025-04-10T14:00:00.000Z INF Executing...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+2025-04-10T14:00:00.000Z INF Creating delegation plan...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+  subgraph: "accounts"
+2025-04-10T14:00:00.000Z INF Executing on subgraph...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+  subgraph: "accounts"
+2025-04-10T14:00:00.000Z INF Fetching data...
+  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
+```
+
+#### Affected Plugins
+
+##### Prometheus i.e. `usePrometheus`
+
+The monitoring plugin `usePrometheus` has been updated to use the new logger API. The `logger` prop
+has been replaced with the `log` prop when using the Hive Gateway runtime.
+
+```diff filename="index.ts"
+import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
+import usePrometheus from '@graphql-mesh/plugin-prometheus'
+
+export const gateway = createGatewayRuntime({
+  plugins: ctx => [
+    usePrometheus({
+      ...ctx,
+-     logger: ctx.logger,
++     log: ctx.log,
+    })
+  ]
+})
+```
+
+If you have been using the `usePrometheus` plugin following the example from
+[Monitoring and Tracing](/docs/gateway/monitoring-tracing#usage-example-1), where the `ctx` argument
+is simply spread into the plugin options, you don't have to change anything.
+
+#### Custom Transport
+
+If you have implemented and been using a custom transport of your own, you will need to update the
+`logger` prop to `log` in the `getSubgraphExecutor` method.
+
+```diff filename="letter-transport.ts"
+import type { Transport } from '@graphql-mesh/transport-common';
+import { letterExecutor } from './my-letter-executor';
+
+export interface LetterTransportOptions {
+  shouldStamp?: boolean;
+}
+
+export default {
+  getSubgraphExecutor(payload) {
+-   payload.logger.info('Creating letter executor...');
++   payload.log.info('Creating letter executor...');
+    return letterExecutor(payload);
+  },
+} satisfies Transport;
+```
+
+### Custom Logger Writers
+
+The new Hive Logger is designed to be extensible and allows you to create custom logger adapters by
+implementing "log writers" instead of the complete logger interface. The `LogWriter` is simply:
+
+```ts
+import { Attributes, LogLevel } from '@graphql-hive/logger'
+
+interface LogWriter {
+  write(
+    level: LogLevel,
+    attrs: Attributes | null | undefined,
+    msg: string | null | undefined
+  ): void | Promise<void>
+  flush?(): void | Promise<void>
+}
+```
+
+As you can see, it's very simple and allows you not only to use your favorite logger, like pino or
+winston, but also to implement custom writers that send logs to an HTTP consumer or write to a
+file.
+
+<Callout>
+  Read more about implementing your own writers in the [Hive Logger documentation](/docs/logger).
+</Callout>
+
+### Pino (Node.js Only)
+
+Use the [Node.js `pino` logger library](https://github.com/pinojs/pino) for writing Hive Logger's
+logs.
+
+`pino` is an optional peer dependency, so you must install it first.
+
+```sh npm2yarn
+npm i pino pino-pretty
+```
+
+Since we're using a custom log writer, you have to install the Hive Logger package too:
+
+```sh npm2yarn
+npm i @graphql-hive/logger
+```
+
+```diff filename="gateway.config.ts"
+import pino from 'pino'
+import { defineConfig } from '@graphql-hive/gateway'
+import { Logger } from '@graphql-hive/logger'
+- import { createLoggerFromPino } from '@graphql-hive/logger-pino'
++ import { PinoLogWriter } from '@graphql-hive/logger/writers/pino'
+
+const pinoLogger = pino({
+  transport: {
+    target: 'pino-pretty'
+  }
+})
+
+export const gatewayConfig = defineConfig({
+- logging: createLoggerFromPino(pinoLogger)
++ logging: new Logger({
++   writers: [new PinoLogWriter(pinoLogger)]
++ })
+})
+```
+
+### Winston (Node.js Only)
+
+Use the [Node.js `winston` logger library](https://github.com/winstonjs/winston) for writing Hive
+Logger's logs.
+
+`winston` is an optional peer dependency, so you must install it first.
+
+```sh npm2yarn
+npm i winston
+```
+
+Since we're using a custom log writer, you have to install the Hive Logger package too:
+
+```sh npm2yarn
+npm i @graphql-hive/logger
+```
+
+```diff filename="gateway.config.ts"
+import { createLogger, format, transports } from 'winston'
+import { defineConfig } from '@graphql-hive/gateway'
+import { Logger } from '@graphql-hive/logger'
+- import { createLoggerFromWinston } from '@graphql-hive/winston'
++ import { WinstonLogWriter } from '@graphql-hive/logger/writers/winston'
+
+const winstonLogger = createLogger({
+  level: 'info',
+  format: format.combine(format.timestamp(), format.json()),
+  transports: [new transports.Console()]
+})
+
+export const gatewayConfig = defineConfig({
+- logging: createLoggerFromWinston(winstonLogger)
++ logging: new Logger({
++   writers: [new WinstonLogWriter(winstonLogger)]
++ })
+})
+```
+
+## OpenTelemetry
+
+The OpenTelemetry integration has been re-worked to offer better traces, custom attributes and
+spans, and overall compatibility with the standard OTEL API.
+
+For these features to be possible, we had to break the configuration API.
+
+You can read more about the new capabilities of the OpenTelemetry tracing integration in the
+[Hive Monitoring / Tracing documentation](/docs/gateway/monitoring-tracing).
+
+### CLI Options
+
+It is now possible to set up OpenTelemetry without a configuration file, by using the new
+`--opentelemetry [exporter-endpoint]` option.
+
+```bash
+hive-gateway supergraph supergraph.graphql \
+  --opentelemetry "http://localhost:4318"
+```
+
+By default, an OTLP HTTP exporter will be used. You can also use a gRPC one by using
+`--opentelemetry-exporter-type`:
+
+```bash
+hive-gateway supergraph supergraph.graphql \
+  --opentelemetry "http://localhost:4317" \
+  --opentelemetry-exporter-type otlp-grpc
+```
+
+### SDK Setup
+
+The OpenTelemetry SDK setup used to be done automatically by the plugin itself; this is no longer
+the case. You have the choice to either set it up yourself using the official `@opentelemetry/*`
+packages (like the official Node SDK `@opentelemetry/sdk-node`), or to use our cross-platform setup
+helper (recommended).
+
+Extracting the OTEL setup out of the plugin allows you to decide on the version of the
+`opentelemetry-js` SDK you want to use.
+
+Most OTEL-related settings have been moved to the `openTelemetrySetup` options.
+
+Please refer to the
+[OpenTelemetry Setup documentation](/docs/gateway/monitoring-tracing#opentelemetry-setup) for more
+information.
+
+### Tracing-Related Configuration
+
+All tracing-related options have been moved to a `traces` option.
+
+```diff filename="gateway.config"
+import { defineConfig } from '@graphql-hive/gateway'
+
+export const gatewayConfig = defineConfig({
+  openTelemetry: {
++   traces: {
+      tracer: ...,
+      spans: {
+        ...
+      }
++   }
+  }
+})
+```
+
+### Span Filter Function Payloads
+
+The payload given as a parameter to the span filtering functions has been restricted.
+
+Due to internal changes, the information available at span filtering time has been reduced to only
+include (depending on the span) the GraphQL `context`, the HTTP `request` and the upstream
+`executionRequest`.
+
+Please refer to the [Request Spans documentation](/docs/gateway/monitoring-tracing#request-spans)
+for details of what is available for each span filter.
+
+### Span Parenting
+
+Spans are now parented correctly. This can have an impact on trace queries used in your dashboards.
+
+Please review your queries so that they do not filter against a `null` parent span ID.
+
+### New GraphQL Operation Span
+
+A new span encapsulating each GraphQL operation has been added.
+
+It is a subspan of the HTTP request span and encapsulates all the actual GraphQL processing. There
+can be multiple GraphQL operation spans for one HTTP request span if you have enabled GraphQL
+operation batching over HTTP.
+
+### Root Context
+
+The OpenTelemetry context is now modified by Hive Gateway. The context is set with the current
+phase's span. This means that if you were creating custom spans in your plugin without explicitly
+providing a parent context, your spans will be considered sub-spans of Hive Gateway's current span.
+
+To maintain your span as a root span, add an explicit parent context at creation time:
+
+```diff
++ import { ROOT_CONTEXT } from '@opentelemetry/api'
+
+export const myPlugin = () => ({
+  onExecute() {
+    myTrace.startActiveSpan(
+      'my-custom-span',
+      { attributes: { foo: 'bar' } },
++     ROOT_CONTEXT,
+      () => {
+        // do something
+      }
+    )
+  }
+})
+```
+
+## Subgraph Name in Execution Request
+
+The targeted subgraph name is now exposed as a field of `ExecutionRequest`, so it is no longer
+necessary to use the `subgraphNameByExecutionRequest` helper utility to find out which subgraph is
+targeted by an execution request. Therefore, `subgraphNameByExecutionRequest` has been removed.
+
+```diff
+- import { subgraphNameByExecutionRequest } from '@graphql-mesh/fusion-runtime';
+
+const useMyPlugin = () => ({
+  onFetch({ executionRequest }) {
+-   const subgraphName = subgraphNameByExecutionRequest.get(executionRequest)
++   const subgraphName = executionRequest.subgraphName
+  }
+})
+```
+
+## Renamed CLI options for Hive Usage Reporting
+
+To prepare for future observability features of Hive Console, `--hive-usage-target` has been
+deprecated; you're recommended to use `--hive-target` instead. It defines the target for the
+observability metrics _and_ usage reporting.
+
+In addition to this change, we've added a new option, `--hive-access-token`, which defines the
+token to be used for both observability and usage reporting.
+
+```diff
+hive-gateway supergraph \
+  http://cdn.graphql-hive.com//supergraph \
+  --hive-cdn-key "" \
+- --hive-usage-target "" \
+- --hive-usage-access-token ""
++ --hive-target "" \
++ --hive-access-token ""
+```
diff --git a/packages/web/docs/src/content/migration-guides/organization-access-tokens.mdx b/packages/web/docs/src/content/migration-guides/organization-access-tokens.mdx
index 7ccb727c60..f64a81f3da 100644
--- a/packages/web/docs/src/content/migration-guides/organization-access-tokens.mdx
+++ b/packages/web/docs/src/content/migration-guides/organization-access-tokens.mdx
@@ -120,16 +120,16 @@ within the Hive dashboard.
 
 Please upgrade Hive Gateway to at least version
 [`hive-gateway@1.10.4`](https://github.com/graphql-hive/gateway/releases/tag/hive-gateway%401.10.4).
 
-Replace the usage of the `--hive-registry-token` config flag with the `--hive-usage-target` and
-`--hive-usage-access-token` flags.
+Replace the usage of the `--hive-registry-token` config flag with the `--hive-target` and
+`--hive-access-token` flags.
 
 ```diff
 hive-gateway supergraph \
   "" \
   --hive-cdn-key "" \
 - --hive-registry-token ""
-+ --hive-usage-target "my-org/my-project/my-target" \
-+ --hive-usage-access-token "hvo1/...TRUNCATED..."
++ --hive-target "my-org/my-project/my-target" \
++ --hive-access-token "hvo1/...TRUNCATED..."
 ```
 
 **Further Reading:**