diff --git a/src/content/docs/agentic-ai/mcp/overview.mdx b/src/content/docs/agentic-ai/mcp/overview.mdx index a27d881feed..d9585bd416c 100644 --- a/src/content/docs/agentic-ai/mcp/overview.mdx +++ b/src/content/docs/agentic-ai/mcp/overview.mdx @@ -38,10 +38,11 @@ The MCP server offers practical benefits for engineers and operations teams and New Relic MCP works with these AI development environments: - **Claude Code:** Command-line interface for Claude +- **Gemini CLI:** Command-line interface for Gemini +- **Kiro CLI:** Command-line interface for Kiro - **Claude Desktop:** Desktop application for interactive AI development -- **VS Code:** Integrated development environment with MCP support - **Windsurf:** Cloud-based AI development platform -- **Gemini CLI:** Command-line interface for Gemini +- **VS Code:** Integrated development environment with MCP support Each environment offers unique advantages depending on your workflow preferences. See our [setup guide](/docs/agentic-ai/mcp/setup) for platform-specific configuration instructions. diff --git a/src/content/docs/agentic-ai/mcp/setup.mdx b/src/content/docs/agentic-ai/mcp/setup.mdx index df587494ec9..14daf087b1d 100644 --- a/src/content/docs/agentic-ai/mcp/setup.mdx +++ b/src/content/docs/agentic-ai/mcp/setup.mdx @@ -46,40 +46,40 @@ Claude Code supports both OAuth (recommended) and API key authentication methods 1. Ensure you have Claude Code installed and configured. 2. Add the MCP server using the command line: -```shell -claude mcp add newrelic --transport http https://mcp.newrelic.com/mcp/ -``` - -Or, edit `~/.claude.json` as shown here: - -```json -{ - "mcpServers": { - "newrelic": { - "httpUrl": "https://mcp.newrelic.com/mcp/", - "oauth": { - "enabled": true, - "clientId": "pUWGgnjsQ0bydqCbavTPpw==", - "authorizationUrl": "https://login.newrelic.com/login", - "tokenUrl": "https://mcp.newrelic.com/oauth2/token", - "scopes": ["openid"] + ```shell + claude mcp add newrelic --transport http https://mcp.newrelic.com/mcp/ + ``` + + Or, edit `~/.claude.json` as shown here: + + ```json + { + "mcpServers": { + "newrelic": { + "httpUrl": "https://mcp.newrelic.com/mcp/", + "oauth": { + "enabled": true, + "clientId": "pUWGgnjsQ0bydqCbavTPpw==", + "authorizationUrl": "https://login.newrelic.com/login", + "tokenUrl": "https://mcp.newrelic.com/oauth2/token", + "scopes": ["openid"] + } + } } } - } -} -``` + ``` 3. Start Claude: -```shell -claude -``` + ```shell + claude + ``` 4. Authenticate: -```shell -/mcp -``` + ```shell + /mcp + ``` 5. Select the `newrelic` mcp server and press **ENTER**. @@ -89,220 +89,254 @@ claude 1. Add the MCP server using the command line: -```shell -claude mcp add newrelic https://mcp.newrelic.com/mcp/ --transport http --header "Api-Key: NRAK-YOUR-KEY-HERE" -``` - -Or, edit `~/.claude.json` as shown here: - -```json -{ - "mcpServers": { - "newrelic": { - "type": "http", - "url": "https://mcp.newrelic.com/mcp/", - "headers": { - "Api-Key": "NRAK-YOUR-KEY-HERE" - } - } - } -} -``` + ```shell + claude mcp add newrelic https://mcp.newrelic.com/mcp/ --transport http --header "Api-Key: NRAK-YOUR-KEY-HERE" + ``` + + Or, edit `~/.claude.json` as shown here: + + ```json + { + "mcpServers": { + "newrelic": { + "type": "http", + "url": "https://mcp.newrelic.com/mcp/", + "headers": { + "Api-Key": "NRAK-YOUR-KEY-HERE" + } + } + } + } + ``` 2. Verify that the MCP server is listed: -```shell -claude mcp list -``` + ```shell + claude mcp list + ``` 3. 
Start Claude: -```shell -claude -``` + ```shell + claude + ``` -## Claude Desktop setup [#claude-desktop] +## Gemini CLI setup [#gemini-cli] -Claude Desktop requires the `mcp-remote` proxy for `OAuth` authentication. Ensure you have `Node.js` installed on your computer for npx to work. +### OAuth method [#oauth] -### OAuth method (recommended) [#oauth] +1. Edit the settings file: `~/.gemini/settings.json` (Linux/macOS) or `%APPDATA%\Gemini\settings.json` (Windows). -1. Create the config directory (if needed): + ```json + { + "theme": "Default", + "selectedAuthType": "oauth-personal", + "mcpServers": { + "newrelic": { + "httpUrl": "https://mcp.newrelic.com/mcp/", + "oauth": { + "enabled": true, + "clientId": "pUWGgnjsQ0bydqCbavTPpw==", + "authorizationUrl": "https://login.newrelic.com/login", + "tokenUrl": "https://mcp.newrelic.com/oauth2/token", + "scopes": ["openid"] + } + } + } + } + ``` -```shell -mkdir -p "~/Library/Application Support/Claude" -``` +2. Start Gemini CLI: -2. Edit the config file: `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows). + ```shell + gemini + ``` -```json -{ - "mcpServers": { - "new-relic-mcp": { - "command": "npx", - "args": [ - "mcp-remote", - "https://mcp.newrelic.com/mcp/" - ] - } - } -} -``` +3. Authenticate: -3. Restart Claude Desktop. + ```shell + /mcp auth newrelic + ``` ### API key method [#api-key] -1. Add the MCP server by editing `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows) as shown here: +1. Add the MCP server by editing `~/.gemini/settings.json` (Linux/macOS) or `%APPDATA%\Gemini\settings.json` (Windows) as shown here: -#### For macOS -```json -{ - "mcpServers": { - "newrelic": { - "command": "npx", - "args": [ - "-y", - "mcp-remote", - "https://mcp.newrelic.com/mcp/", - "--transport", "http", - "--header", "api-key: NRAK-xxxx" - ] - } - } -} -``` -#### For Windows -```json -{ - "mcpServers": { - "newrelic": { - "command": "C:\\PROGRA~1\\nodejs\\npx.cmd", - "args": [ - "-y", - "mcp-remote", - "https://mcp.newrelic.com/mcp/", - "--transport", "http", - "--header", "api-key: NRAK-xxxx" - ] + ```json + { + "theme": "Default", + "selectedAuthType": "oauth-personal", + "mcpServers": { + "newrelic": { + "url": "https://mcp.newrelic.com/mcp/", + "headers": { + "api-key": "NRAK-YOUR-KEY-HERE" + } + } + } } - } -} -``` - -2. Restart Claude Desktop. On Windows, changes may not take effect until the Claude application is closed via Task Manager. + ``` -## Windsurf setup [#windsurf] - -### OAuth method (via mcp-remote proxy) [#oauth] +2. Start Gemini CLI: -1. Edit the config file, `~/.codeium/windsurf/mcp_config.json`, as shown here: + ```shell + gemini + ``` -```json -{ - "mcpServers": { - "newrelic-oauth": { - "command": "npx", - "args": [ - "-y", - "mcp-remote", - "https://mcp.newrelic.com/mcp/" - ] - } - } -} -``` +3. Authenticate: -2. Restart Windsurf. + ```shell + /mcp auth newrelic + ``` -3. Check MCP servers using the hammer icon in the Cascade panel. +## Kiro CLI setup [#kiro-cli] -### API key method [#api-key] +### OAuth method (recommended) [#oauth] -1. Add the MCP server by editing `~/.codeium/windsurf/mcp_config.json` as shown here: +1. Ensure you have the [Kiro CLI tool installed](https://kiro.dev/cli/) on your machine. +2. 
Configure the MCP server by opening (or creating) the configuration file located at `~/.kiro/settings/mcp.json` and add the following configuration: -```json -{ - "mcpServers": { - "newrelic-api": { - "serverUrl": "https://mcp.newrelic.com/mcp/", - "headers": { - "api-key": "NRAK-YOUR-API-KEY" + ```json + { + "mcpServers": { + "newrelic-mcp-server": { + "url": "https://mcp.newrelic.com/mcp/", + "oauth": { + "authorizationUrl": "https://login.newrelic.com/oauth2/authorize", + "tokenUrl": "https://login.newrelic.com/oauth2/token", + "scopes": ["openid", "profile", "mcp:access"], + "usePKCE": true + }, + "disabled": false } } } -} -``` + ``` -2. Restart Windsurf. +3. Authenticate and connect: + - Open your terminal and run `kiro-cli`. + - Type the command `/mcp` and hit enter. + - You will see the New Relic server listed with a URL. Open that URL in your browser. + - Complete the standard New Relic login process. + - After you're authenticated, the browser will confirm success. -## Gemini CLI setup [#gemini-cli] +## Claude Desktop setup [#claude-desktop] -### OAuth method [#oauth] +Claude Desktop requires the `mcp-remote` proxy for `OAuth` authentication. Ensure you have `Node.js` installed on your computer for npx to work. -1. Edit the settings file: `~/.gemini/settings.json` (Linux/macOS) or `%APPDATA%\Gemini\settings.json` (Windows). +### OAuth method (recommended) [#oauth] -```json -{ - "theme": "Default", - "selectedAuthType": "oauth-personal", - "mcpServers": { - "newrelic": { - "httpUrl": "https://mcp.newrelic.com/mcp/", - "oauth": { - "enabled": true, - "clientId": "pUWGgnjsQ0bydqCbavTPpw==", - "authorizationUrl": "https://login.newrelic.com/login", - "tokenUrl": "https://mcp.newrelic.com/oauth2/token", - "scopes": ["openid"] +1. Create the config directory (if needed): + + ```shell + mkdir -p "~/Library/Application Support/Claude" + ``` + +2. Edit the config file: `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows). + + ```json + { + "mcpServers": { + "new-relic-mcp": { + "command": "npx", + "args": [ + "mcp-remote", + "https://mcp.newrelic.com/mcp/" + ] + } } } - } -} -``` + ``` -2. Start Gemini CLI: +3. Restart Claude Desktop. -```shell -gemini -``` +### API key method [#api-key] -3. Authenticate: +1. Add the MCP server by editing `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows) as shown here: -```shell -/mcp auth newrelic -``` + - For macOS: + + ```json + { + "mcpServers": { + "newrelic": { + "command": "npx", + "args": [ + "-y", + "mcp-remote", + "https://mcp.newrelic.com/mcp/", + "--transport", "http", + "--header", "api-key: NRAK-xxxx" + ] + } + } + } + ``` + + - For Windows: + + ```json + { + "mcpServers": { + "newrelic": { + "command": "C:\\PROGRA~1\\nodejs\\npx.cmd", + "args": [ + "-y", + "mcp-remote", + "https://mcp.newrelic.com/mcp/", + "--transport", "http", + "--header", "api-key: NRAK-xxxx" + ] + } + } + } + ``` -### API key method [#api-key] +2. Restart Claude Desktop. On Windows, changes may not take effect until the Claude application is closed via Task Manager. -1. 
Add the MCP server by editing `~/.gemini/settings.json` (Linux/macOS) or `%APPDATA%\Gemini\settings.json` (Windows) as shown here: +## Windsurf setup [#windsurf] -```json -{ - "theme": "Default", - "selectedAuthType": "oauth-personal", - "mcpServers": { - "newrelic": { - "url": "https://mcp.newrelic.com/mcp/", - "headers": { - "api-key": "NRAK-YOUR-KEY-HERE" +### OAuth method (via mcp-remote proxy) [#oauth] + +1. Edit the config file, `~/.codeium/windsurf/mcp_config.json`, as shown here: + + ```json + { + "mcpServers": { + "newrelic-oauth": { + "command": "npx", + "args": [ + "-y", + "mcp-remote", + "https://mcp.newrelic.com/mcp/" + ] + } } } - } -} -``` + ``` -2. Start Gemini CLI: +2. Restart Windsurf. -```shell -gemini -``` +3. Check MCP servers using the hammer icon in the Cascade panel. -3. Authenticate: +### API key method [#api-key] -```shell -/mcp auth newrelic -``` +1. Add the MCP server by editing `~/.codeium/windsurf/mcp_config.json` as shown here: + + ```json + { + "mcpServers": { + "newrelic-api": { + "serverUrl": "https://mcp.newrelic.com/mcp/", + "headers": { + "api-key": "NRAK-YOUR-API-KEY" + } + } + } + } + ``` + +2. Restart Windsurf. ## VS Code setup [#vs-code] VS Code remains fully supported with both authentication methods. You need VS Code (version 1.60 or later). @@ -311,16 +345,16 @@ VS Code remains fully supported with both authentication methods. You need VS Co Add the MCP server by editing `.vscode/mcp.json` as shown here: -```json -{ - "servers": { - "new-relic-mcp": { - "url": "https://mcp.newrelic.com/mcp/", - "type": "http" + ```json + { + "servers": { + "new-relic-mcp": { + "url": "https://mcp.newrelic.com/mcp/", + "type": "http" + } } } -} -``` + ``` ### API key method [#api-key] @@ -341,12 +375,15 @@ Add the MCP server by editing `.vscode/mcp.json` as shown here: ### Setup steps 1. Create or open `mcp.json` in your VS Code workspace: -- If you don't have a `.vscode` directory, create one in your project root. - - Inside `.vscode`, create a file named `mcp.json`. - - Add your authentication configuration using one of the methods above. + + - If you don't have a `.vscode` directory, create one in your project root. + - Inside `.vscode`, create a file named `mcp.json`. + - Add your authentication configuration using one of the methods above. + 2. Start server: - - Ensure your `mcp.json` file is open in the editor. - - Look for a clickable link (CodeLens) above your server configuration. Click it to start the MCP server. + + - Ensure your `mcp.json` file is open in the editor. + - Look for a clickable link (CodeLens) above your server configuration. Click it to start the MCP server. Once connected, you can interact with New Relic AI using natural language prompts. The MCP server will manage authentication and data retrieval from your New Relic account. 
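If a client fails to connect after any of the configurations above, a quick way to tell an endpoint problem from a client-configuration problem is to probe the MCP URL directly. The check below is a sketch, not part of the official setup flow: it assumes the endpoint documented above (`https://mcp.newrelic.com/mcp/`) and the `Api-Key` header format shown in the API key examples, and it only confirms that the host answers — the exact status code returned for a bare request is not documented here.

```shell
# Reachability check only; replace NRAK-YOUR-KEY-HERE with your real key.
# A TLS or connection error points at network/proxy issues; any HTTP status
# code means the endpoint answered and the problem is likely client-side.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Api-Key: NRAK-YOUR-KEY-HERE" \
  https://mcp.newrelic.com/mcp/
```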
diff --git a/src/i18n/content/es/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx b/src/i18n/content/es/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx index d6d7999491a..b94ea153852 100644 --- a/src/i18n/content/es/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx +++ b/src/i18n/content/es/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx @@ -141,7 +141,7 @@ El agente instrumentó automáticamente estos framework y biblioteca: * Rocíe 1.3.1 a la última versión * Tomcat 7.0.0 a la última versión * Undertow 1.1.0.Final a la última versión - * WebLogic 12.1.2.1 a 12.2.x (exclusivo) + * WebLogic 12.1.2.1 a 14.1.1 * WebSphere 8 a 9 (exclusivo) * WebSphere Liberty 8.5 a la última versión * Wildfly 8.0.0.Final a la última versión diff --git a/src/i18n/content/es/docs/cci/azure-cci.mdx b/src/i18n/content/es/docs/cci/azure-cci.mdx index 6119765a600..9f6638caf63 100644 --- a/src/i18n/content/es/docs/cci/azure-cci.mdx +++ b/src/i18n/content/es/docs/cci/azure-cci.mdx @@ -395,7 +395,7 @@ Antes de conectar Azure a Inteligencia Artificial en la nube, cerciorar de tener - Ingrese la ruta base donde se almacenan los datos de facturación dentro del contenedor (por ejemplo, `20251001-20251031` para octubre de 2025). **Nota**: Si su exportación de facturación se publica directamente en la raíz del contenedor, deje este campo vacío. + Ingrese la ruta relativa en el contenedor desde el cual ve las caídas de datos de facturación en un formato mensual (por ejemplo, `20251101-20251130` para noviembre de 2025 o `20251201-20251231` para diciembre de 2025). **Nota**: Si su exportación de facturación se publica directamente en la raíz del contenedor, deje este campo vacío. diff --git a/src/i18n/content/es/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/es/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx index 8829dbc5cd6..df4a5636ee3 100644 --- a/src/i18n/content/es/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx +++ b/src/i18n/content/es/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx @@ -26,28 +26,26 @@ Nuestra integración con Snowflake le permite recopilar datos completos sobre va ## Configuración de métricas Snowflake - Ejecute el siguiente comando para almacenar Snowflake métrica en formato JSON, permitiendo que nri-flex lo lea. Cerciorar de modificar ACCOUNT, USERNAME y SNOWSQL\_PWD según corresponda. + Ejecute el siguiente comando para almacenar las métricas de Snowflake en formato JSON, lo que permite que nri-flex lo lea. Asegúrese de modificar `ACCOUNT`, `USERNAME` y `SNOWSQL_PWD` en consecuencia. 
```shell - - # Run the below command as a 1 minute cronjob - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o 
timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", 
SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json - + # Run the below command as a 1 minute cronjob + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > 
/tmp/snowflake-stage-storage-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o 
remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json ``` @@ -59,130 +57,126 @@ Nuestra integración con Snowflake le permite recopilar datos completos sobre va 1. Cree un archivo llamado `nri-snowflake-config.yml` en el directorio de integración: ```shell - - touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml - + touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml ``` 2. Agregue el siguiente fragmento a su archivo `nri-snowflake-config.yml` para permitir que el agente capture datos de Snowflake: ```yml - - --- - integrations: - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountMetering - apis: - - name: snowflakeAccountMetering - file: /tmp/snowflake-account-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseLoadHistory - apis: - - name: snowflakeWarehouseLoadHistory - file: /tmp/snowflake-warehouse-load-history-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseMetering - apis: - - name: snowflakeWarehouseMetering - file: /tmp/snowflake-warehouse-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeTableStorage - apis: - - name: snowflakeTableStorage - file: /tmp/snowflake-table-storage-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStageStorageUsage - apis: - - name: snowflakeStageStorageUsage - file: /tmp/snowflake-stage-storage-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeReplicationUsgae - apis: - - name: snowflakeReplicationUsgae - file: /tmp/snowflake-replication-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeQueryHistory - apis: - - name: snowflakeQueryHistory - file: /tmp/snowflake-query-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakePipeUsage - apis: - - name: snowflakePipeUsage - file: /tmp/snowflake-pipe-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLongestQueries - apis: - - name: snowflakeLongestQueries - file: /tmp/snowflake-longest-queries.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLoginFailure - apis: - - name: snowflakeLoginFailure - file: /tmp/snowflake-login-failures.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDatabaseStorageUsage - apis: - - name: 
snowflakeDatabaseStorageUsage - file: /tmp/snowflake-database-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDataTransferUsage - apis: - - name: snowflakeDataTransferUsage - file: /tmp/snowflake-data-transfer-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeCreditUsageByWarehouse - apis: - - name: snowflakeCreditUsageByWarehouse - file: /tmp/snowflake-credit-usage-by-warehouse.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAutomaticClustering - apis: - - name: snowflakeAutomaticClustering - file: /tmp/snowflake-automatic-clustering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStorageUsage - apis: - - name: snowflakeStorageUsage - file: /tmp/snowflake-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountDetails - apis: - - name: snowflakeAccountDetails - file: /tmp/snowflake-account-details.json - + --- + integrations: + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountMetering + apis: + - name: snowflakeAccountMetering + file: /tmp/snowflake-account-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseLoadHistory + apis: + - name: snowflakeWarehouseLoadHistory + file: /tmp/snowflake-warehouse-load-history-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseMetering + apis: + - name: snowflakeWarehouseMetering + file: /tmp/snowflake-warehouse-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeTableStorage + apis: + - name: snowflakeTableStorage + file: /tmp/snowflake-table-storage-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStageStorageUsage + apis: + - name: snowflakeStageStorageUsage + file: /tmp/snowflake-stage-storage-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeReplicationUsgae + apis: + - name: snowflakeReplicationUsgae + file: /tmp/snowflake-replication-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeQueryHistory + apis: + - name: snowflakeQueryHistory + file: /tmp/snowflake-query-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakePipeUsage + apis: + - name: snowflakePipeUsage + file: /tmp/snowflake-pipe-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLongestQueries + apis: + - name: snowflakeLongestQueries + file: /tmp/snowflake-longest-queries.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLoginFailure + apis: + - name: snowflakeLoginFailure + file: /tmp/snowflake-login-failures.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDatabaseStorageUsage + apis: + - name: snowflakeDatabaseStorageUsage + file: /tmp/snowflake-database-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDataTransferUsage + apis: + - name: snowflakeDataTransferUsage + file: /tmp/snowflake-data-transfer-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeCreditUsageByWarehouse + apis: + - name: snowflakeCreditUsageByWarehouse + file: /tmp/snowflake-credit-usage-by-warehouse.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAutomaticClustering + apis: + - name: snowflakeAutomaticClustering + file: /tmp/snowflake-automatic-clustering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStorageUsage + apis: + - name: snowflakeStorageUsage + file: /tmp/snowflake-storage-usage.json + - name: nri-flex + interval: 
30s + config: + name: snowflakeAccountDetails + apis: + - name: snowflakeAccountDetails + file: /tmp/snowflake-account-details.json ``` @@ -192,9 +186,7 @@ Nuestra integración con Snowflake le permite recopilar datos completos sobre va Reinicie su agente de infraestructura. ```shell - sudo systemctl restart newrelic-infra.service - ``` En un par de minutos, tu aplicación se enviará métrica a [one.newrelic.com](https://one.newrelic.com). @@ -215,9 +207,7 @@ Nuestra integración con Snowflake le permite recopilar datos completos sobre va A continuación se muestra una consulta NRQL para comprobar la métrica Snowflake: ```sql - - SELECT * from snowflakeAccountSample - + SELECT * FROM snowflakeAccountSample ``` diff --git a/src/i18n/content/es/docs/logs/forward-logs/azure-log-forwarding.mdx b/src/i18n/content/es/docs/logs/forward-logs/azure-log-forwarding.mdx index 708064be259..52819f18606 100644 --- a/src/i18n/content/es/docs/logs/forward-logs/azure-log-forwarding.mdx +++ b/src/i18n/content/es/docs/logs/forward-logs/azure-log-forwarding.mdx @@ -36,19 +36,80 @@ Para enviar el registro desde su centro de eventos: Sigue estos pasos: 1. Asegúrate de tener un . + 2. Desde **[one.newrelic.com](https://one.newrelic.com/launcher/logger.log-launcher)**, haga clic en **Integrations & Agents** en el menú de navegación izquierdo. + 3. En la categoría **Logging** , haga clic en el mosaico **Microsoft Azure Event Hub** en la lista de fuentes de datos. + 4. Seleccione la cuenta a la que desea enviar el registro y haga clic en **Continue**. + 5. Haga clic en **Generate API key** y copie la clave de API generada. + 6. Haga clic en **Deploy to Azure** y se abrirá una nueva pestaña con la plantilla ARM cargada en Azure. + 7. Seleccione el **Resource group** donde desea crear los recursos necesarios y un **Region**. A pesar de no ser obligatorio, recomendamos instalar la plantilla en un nuevo grupo de recursos, para evitar eliminar alguno de los componentes que crea accidentalmente. + 8. En el campo **New Relic license key** , pegue la clave de API previamente copiada. + 9. Asegúrese de que el [extremo New Relic ](/docs/logs/log-api/introduction-log-api/#endpoint)esté configurado en el correspondiente a su cuenta. -10. Opcional: establezca en `true` los [registros de actividad de suscripción de Azure](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) que desea reenviar. Consulte [la información de suscripción](#subscription-activity-logs) en este documento para obtener más detalles. -11. Haga clic en **Review + create**, revise los datos que ha insertado y haga clic en **Create**. + +10. Seleccione el modo de escalado. El valor predeterminado es `Basic`. + +11. Opcional: Configure los parámetros de procesamiento por lotes de EventHub (disponible en v2.8.0+) para optimizar el rendimiento: + + * **Tamaño máximo del lote de eventos**: Máximo de eventos por lote (predeterminado: 500, mínimo: 1) + * **Tamaño mínimo del lote de eventos**: Mínimo de eventos por lote (predeterminado: 20, mínimo: 1) + * **Tiempo máximo de espera**: Tiempo máximo de espera para crear un lote en formato HH:MM:SS (predeterminado: 00:00:30) + +12. Opcional: establezca en `true` los [registros de actividad de suscripción de Azure](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) que desea reenviar. Consulte [la información de suscripción](#subscription-activity-logs) en este documento para obtener más detalles. + +13. 
Haga clic en **Review + create**, revise los datos que ha insertado y haga clic en **Create**. Tenga en cuenta que la plantilla es idempotente. Puede comenzar a reenviar el registro desde Event Hub y luego volver a ejecutar la misma plantilla para configurar el reenvío [del registro de actividad de la suscripción de Azure](#subscription-activity-logs) completando el paso 10. +### Configure el procesamiento por lotes y el escalado de EventHub (opcional) [#eventhub-configuration] + +A partir de la versión 2.8.0, la plantilla ARM admite opciones de configuración avanzadas de EventHub para optimizar el rendimiento y el rendimiento: + +**Parámetros de procesamiento por lotes del desencadenador de EventHub:** + +Puede configurar el comportamiento de procesamiento por lotes para controlar cómo se procesan los eventos. Esta configuración se configura como la configuración de la aplicación de Azure Function: + +* **Tamaño máximo del lote de eventos** : Número máximo de eventos entregados en un lote a la función (predeterminado: 500, mínimo: 1). Esto controla el límite superior de eventos procesados juntos. + +* **Tamaño mínimo del lote de eventos** : Número mínimo de eventos entregados en un lote a la función (predeterminado: 20, mínimo: 1). La función esperará a acumular al menos esta cantidad de eventos antes de procesarlos, a menos que se alcance el tiempo máximo de espera. + +* **Tiempo máximo de espera** : Tiempo máximo para esperar a crear un lote antes de entregarlo a la función (predeterminado: 00:00:30, formato: HH:MM:SS). Esto garantiza un procesamiento oportuno incluso cuando el volumen de eventos es bajo. + +Estos parámetros ayudan a optimizar el rendimiento y la utilización de recursos en función del volumen de registros y los requisitos de procesamiento. Ajuste estos valores según su caso de uso específico: + +* Aumente los tamaños de los lotes para escenarios de alto volumen para mejorar el rendimiento +* Disminuya los tamaños de los lotes para los requisitos de baja latencia +* Ajuste el tiempo de espera para equilibrar la latencia y la eficiencia del procesamiento por lotes + +**Configuración de escalado (v2.7.0+):** + +La plantilla admite la configuración del modo de escalado de Azure Functions, lo que le permite optimizar los costos y el rendimiento en función de su carga de trabajo: + +* **Modo de escalado básico**: Utiliza un plan basado en el consumo de SKU dinámico (nivel Y1) de forma predeterminada, donde Azure agrega y elimina automáticamente instancias de función en función del número de eventos entrantes. + + * Si la opción `disablePublicAccessToStorageAccount` está habilitada, utiliza un plan de SKU básico (nivel B1) para admitir la integración de VNet. + * Este modo es ideal para cargas de trabajo variables y proporciona una optimización de costos automática con precios de pago por ejecución. + * El espacio de nombres de EventHub incluye 4 particiones con escalado de unidad de rendimiento estándar. + +* **Modo de escalado empresarial**: Proporciona capacidades de escalado avanzadas con recursos informáticos dedicados y más control sobre el escalado de instancias. Este modo ofrece: + + * Funcionalidad de escalado automático tanto para la aplicación de funciones como para EventHub. 
+ * Plan de hospedaje Elastic Premium (EP1) con escalado por sitio habilitado + * EventHub auto-inflate habilitado con un máximo de 40 unidades de rendimiento + * Mayor recuento de particiones (32 particiones frente a 4 en modo básico) para una mejor paralelización + * Rendimiento predecible y menor latencia con instancias precalentadas + * Más adecuado para escenarios de reenvío de registros de misión crítica y de alto volumen + +**Notas importantes:** + +* Al actualizar del modo Básico al modo Enterprise, deberá volver a aprovisionar EventHub debido a la limitación de Azure de que una SKU estándar no puede cambiar los recuentos de particiones después de la creación. + ### Opcional: envíe el registro de actividad de Azure desde su suscripción [#subscription-activity-logs] diff --git a/src/i18n/content/es/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx b/src/i18n/content/es/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx index 76eafc0d6b2..5a5fa96d7f4 100644 --- a/src/i18n/content/es/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx +++ b/src/i18n/content/es/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx @@ -95,7 +95,11 @@ Cuando ocurre un error ANR, Android captura un rastreo del stack. Un rastreo de **Desofuscación:** -Actualmente, New Relic no desofusca automáticamente el rastreo de stack ANR dentro de la plataforma. Se planea brindar soporte para esta función en una versión futura. Mientras tanto, puedes descargar el rastreo del stack ANR ofuscado desde New Relic y luego usar herramientas fuera de línea, como la utilidad `ndk-stack` o `retrace` de Proguard/R8, para simbolizar el rastreo del stack manualmente. +New Relic simboliza automáticamente los marcos de pila de Java en los rastreos de pila de ANR, proporcionando nombres de métodos legibles y números de línea directamente en la plataforma. + + + Los marcos de pila nativos (NDK) no se simbolizan actualmente. Para los marcos de pila nativos, puede descargar el rastreo de pila de New Relic y utilizar herramientas sin conexión como `ndk-stack` para simbolizar manualmente. + ## Deshabilitar el monitoreo ANR [#disable-anr-monitoring] diff --git a/src/i18n/content/es/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx b/src/i18n/content/es/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx new file mode 100644 index 00000000000..6cab0005134 --- /dev/null +++ b/src/i18n/content/es/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx @@ -0,0 +1,79 @@ +--- +subject: Docs +releaseDate: '2025-12-19' +version: 'December 15 - December 19, 2025' +translationType: machine +--- + +### Nuevos documentos + +* Se agregó [User impact](/docs/browser/new-relic-browser/browser-pro-features/user-impact) para proporcionar una guía completa para comprender las señales de frustración y el impacto en el rendimiento en la experiencia del usuario. + +### Cambios importantes + +* Se actualizó el [Catálogo de acciones](/docs/workflow-automation/setup-and-configuration/actions-catalog) con una reestructuración y organización exhaustivas de las acciones del flujo de trabajo. +* Se actualizaron los [logs de Browser: Primeros pasos](/docs/browser/browser-monitoring/browser-pro-features/browser-logs/get-started) con actualizaciones automáticas y manuales de captura de logs. 
+* Se actualizó [Vistas de página: Examinar el rendimiento de la página](/docs/browser/new-relic-browser/browser-pro-features/page-views-examine-page-performance) con señales de frustración e información sobre el impacto en el rendimiento. +* Se agregó [Referencia de proveedores de datos](/docs/sap-solutions/additional-resources/data-providers-reference) para proporcionar una guía detallada para los proveedores de datos de soluciones SAP. + +### Cambios menores + +* Se agregó documentación de configuración del filtro eBPF a [Instalar eBPF Network Observability en Kubernetes](/docs/ebpf/k8s-installation) y [Instalar eBPF Network Observability en Linux](/docs/ebpf/linux-installation). +* Se actualizó [Agentic AI: Configuración del Protocolo de Contexto del Modelo](/docs/agentic-ai/mcp/setup) con instrucciones de configuración mejoradas. +* Se actualizó [Compatibilidad y requisitos del agente PHP](/docs/apm/agents/php-agent/getting-started/php-agent-compatibility-requirements) con Kinesis Data Streams y Drupal 11.1/11.2. compatibilidad. +* Se actualizó [Compatibilidad y requisitos del agente .NET](/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements) con las últimas versiones compatibles verificadas para las dependencias. +* Se actualizó [Compatibilidad y requisitos del agente Node.js](/docs/apm/agents/nodejs-agent/getting-started/compatibility-requirements-nodejs-agent) con el último informe de compatibilidad. +* Se actualizó [Compatibilidad y requisitos del agente Java](/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent) con la información de compatibilidad actual. +* Se mejoró [Instrumentar la función AWS Lambda con Python](/docs/serverless-function-monitoring/azure-function-monitoring/container) con un comando de instalación explícito para las funciones de Azure en contenedores. +* Se actualizó [Monitoreo del flujo de red](/docs/network-performance-monitoring/setup-performance-monitoring/network-flow-monitoring) con la última versión de Ubuntu compatible con kTranslate. +* Se actualizó [Actualización de Lambda a la experiencia APM](/docs/serverless-function-monitoring/aws-lambda-monitoring/instrument-lambda-function/upgrade-to-apm-experience) para reflejar la nueva compatibilidad con funciones de contenedor. +* Se agregaron publicaciones de Novedades para: + * [Transacción 360](/whats-new/2025/12/whats-new-12-15-transaction-360) + +### Notas de la versión + +* Mantener actualizado con nuestros últimos lanzamientos: + + * [Agente PHP v12.3.0.28](/docs/release-notes/agent-release-notes/php-release-notes/php-agent-12-3-0-28): + + * Se agregó la instrumentación de aws-sdk-php Kinesis Data Streams. + * Se corrigió un problema por el cual el daemon no borraba la caché del paquete al reiniciar. + * Se actualizó la versión de golang a 1.25.5. + + * [Agente Node.js v13.8.1](/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-13-8-1): + * Se actualizó la instrumentación de AWS Lambda para omitir el ajuste de la devolución de llamada del controlador si no está presente. + + * [Agente Java v8.25.1](/docs/release-notes/agent-release-notes/java-release-notes/java-agent-8251): + * Se corrigió el error de Kotlin Coroutine sobre la implementación de terceros de `CancellableContinuation`. + + * [Agente de Browser v1.306.0](/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1.306.0): + + * Se agregó control para la API de log a través de una bandera RUM separada. 
+ * Se mejoró la validación para responseStart antes de confiar en onTTFB. + * Se eliminó la sintaxis de salto de línea de la salida de webpack. + + * [Integración de Kubernetes v3.51.1](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-1): + * Lanzado con las versiones de gráficos newrelic-infrastructure-3.56.1 y nri-bundle-6.0.30. + + * [NRDOT v1.7.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-15): + * Se agregaron componentes ohi a la distribución nrdot-collector-experimental. + + * [NRDOT v1.6.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-12): + + * Se actualizaron las versiones de los componentes otel de v0.135.0 a v0.141.0. + * Se corrigió CVE-2025-61729 actualizando a golang 1.24.11. + * Se solucionó la desaprobación de la configuración de transformprocessor de 0.119.0. + + * [Lanzamiento del administrador de trabajos 493](/docs/release-notes/synthetics-release-notes/job-manager-release-notes/job-manager-release-493): + + * Se corrigió el problema de compatibilidad con Docker 29 causado por la actualización de la versión mínima de la API a 1.44. + * Se agregó el enmascaramiento de datos para información confidencial para cubrir los resultados de trabajos fallidos. + + * [Node Browser Runtime rc1.5](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.5): + * Lanzamiento actualizado con los últimos cambios. + + * [Node API Runtime rc1.5](/docs/release-notes/synthetics-release-notes/node-api-runtime-release-notes/node-api-runtime-rc1.5): + * Lanzamiento actualizado con los últimos cambios. + + * [Node Browser Runtime rc1.6](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.6): + * Lanzamiento actualizado con los últimos cambios. \ No newline at end of file diff --git a/src/i18n/content/es/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx b/src/i18n/content/es/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx new file mode 100644 index 00000000000..536e104d4a2 --- /dev/null +++ b/src/i18n/content/es/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx @@ -0,0 +1,13 @@ +--- +subject: Kubernetes integration +releaseDate: '2025-12-23' +version: 3.51.2 +translationType: machine +--- + +Para obtener una descripción detallada de los cambios, consulte las [notas de la versión](https://github.com/newrelic/nri-kubernetes/releases/tag/v3.51.2). 
+ +Esta integración está incluida en las siguientes versiones de gráficos: + +* [newrelic-infrastructure-3.56.2](https://github.com/newrelic/nri-kubernetes/releases/tag/newrelic-infrastructure-3.56.2) +* [nri-bundle-6.0.31](https://github.com/newrelic/helm-charts/releases/tag/nri-bundle-6.0.31) \ No newline at end of file diff --git a/src/i18n/content/es/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx b/src/i18n/content/es/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx new file mode 100644 index 00000000000..57ae43c30aa --- /dev/null +++ b/src/i18n/content/es/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx @@ -0,0 +1,17 @@ +--- +subject: NRDOT +releaseDate: '2025-12-19' +version: 1.8.0 +metaDescription: Release notes for NRDOT Collector version 1.8.0 +translationType: machine +--- + +## Log de cambios + +### Característica + +* feat: Actualizar las versiones de los componentes otel de v0.141.0 a v0.142.0 (#464) + +### Corrección de errores + +* fix: forzar expr-lang/expr:1.17.7 para solucionar CVE-2025-68156 (#468) \ No newline at end of file diff --git a/src/i18n/content/es/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx b/src/i18n/content/es/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx index db448f2f8ed..2644be01617 100644 --- a/src/i18n/content/es/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx +++ b/src/i18n/content/es/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx @@ -36,7 +36,7 @@ New Relic recomienda encarecidamente a nuestros clientes que empleen la instrume - diff --git a/src/i18n/content/es/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx b/src/i18n/content/es/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx new file mode 100644 index 00000000000..cca36260d08 --- /dev/null +++ b/src/i18n/content/es/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx @@ -0,0 +1,169 @@ +--- +title: Inteligencia de la arquitectura del servicio con la integración de la nube de GitHub +tags: + - New Relic integrations + - GitHub integration +metaDescription: 'Learn how to integrate GitHub with New Relic to import repositories, teams, and user data for enhanced service architecture intelligence.' +freshnessValidatedDate: never +translationType: machine +--- + +La integración de GitHub mejora la Inteligencia para arquitectura de servicios al enriquecer sus datos New Relic con el contexto de su organización GitHub. Al conectar su cuenta de GitHub, puede importar su repositorio, equipos y datos pull request a New Relic. Esta información adicional fortalece el valor de [Equipos](/docs/service-architecture-intelligence/teams/teams), [Catálogos](/docs/service-architecture-intelligence/catalogs/catalogs) y [Cuadros de Mando](/docs/service-architecture-intelligence/scorecards/getting-started), brindándole una visión más completa y conectada de su trabajo de ingeniería. + +## Antes de que empieces + +**Prerrequisitos:** + +* Debe tener el rol de Administrador de organización o Administrador de dominio de autenticación. + +**Plataforma soportada:** + +* Nube de GitHub +* GitHub Enterprise Cloud (sin residencia de datos) + +**Regiones admitidas:** regiones de EE. UU. y la UE + + + * No se admiten GitHub Enterprise Server ni GitHub Enterprise Cloud con residencia de datos. 
+ * No se admite la instalación de la integración en cuentas de usuario de GitHub. Si bien GitHub permite instalar la aplicación a nivel de usuario, el proceso de sincronización no funcionará y no se importarán datos a New Relic. + * La integración de GitHub no es compatible con FedRAMP. + + +## ¿Qué datos se pueden sincronizar? + +La integración de GitHub le permite elegir de forma selectiva qué tipos de datos importar a New Relic, lo que le brinda control sobre qué información se sincroniza: + +### Tipos de datos disponibles + +* **Repositorio y solicitud de extracción**: importe datos del repositorio y pull request para una mejor visibilidad del código y seguimiento de la implementación + +* **Equipos**: Importa los equipos de GitHub y sus miembros para mejorar la gestión y la propiedad de los equipos en Mapeo. + + + **Conflictos de integración de equipos**: si los equipos ya se integraron en New Relic desde otra fuente (como Okta u otro proveedor de identidad), no se permitirá recuperar ni almacenar los equipos de GitHub para evitar conflictos de datos. En este caso, solo puedes seleccionar el repositorio y los datos pull request.\ + **Requisito de visibilidad del email del usuario**: para garantizar que la membresía del equipo esté alineada con sus equipos de GitHub, el usuario de GitHub deberá configurar sus direcciones de email como públicas en la configuración de su perfil de GitHub. Los miembros del equipo con configuración de email privado serán excluidos del proceso de sincronización de datos de usuario. + + +## Configurar la integración de GitHub + +1. Vaya a **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**. + +2. Seleccione la cuenta en la que desea configurar la integración. + +3. Seleccione **Set up a new integration** y haga clic en **Continue**. + +4. En la pantalla **Begin integration** : + + a. Haga clic en **Get started in GitHub** para conectar su cuenta. La aplicación de observabilidad New Relic se abre en GitHub Marketplace. + + b. Complete la instalación de la aplicación dentro de su organización de GitHub. Luego de la instalación, será redirigido nuevamente a la interfaz de New Relic. + + c. Seleccione **Begin integration** nuevamente y haga clic en **Continue**. + + d. **Select your data preferences**: Elija qué tipos de datos desea sincronizar: + + * **Teams + Users**: Importa estructuras de equipos de GitHub e información de usuarios. + * **Repositories + Pull Requests**: Importa datos del repositorio y pull request. + * **Both**: Importa todos los tipos de datos disponibles. + + mi. Si seleccionó **Teams + Users**, se mostrará una lista de todos los equipos de GitHub. Seleccione todos los equipos o una selección de ellos para importar. + + f. Haga clic en **Start first sync** para comenzar a importar los datos seleccionados. + + gramo. Luego de ver el mensaje **Sync started** , haga clic en **Continue**. La pantalla **Integration status** mostrará el recuento de los tipos de datos seleccionados (equipos, repositorio, etc.), actualizar cada 5 segundos. Espere unos minutos para que se complete la importación de todos los datos. + + GitHub integration + +5. *(Opcional)* En la pantalla **GitHub integration**, puedes acceder a tus datos importados: + + * Haga clic en **Go to Teams** para ver los equipos importados en la página [Teams](/docs/service-architecture-intelligence/teams/teams) (si se seleccionaron equipos durante la configuración). 
+ * Haga clic en **Go to Repositories** para ver la información del repositorio importada en el catálogo [Repositories](/docs/service-architecture-intelligence/repositories/repositories) (si se seleccionó el repositorio durante la configuración). + +## Gestiona tu integración de GitHub + +Luego de configurar su integración de GitHub, puede gestionarla a través de la interfaz de New Relic. Esto incluye actualizar datos, editar la configuración y desinstalar cuando sea necesario. + +### Gestión de integración de acceso + +1. Vaya a **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**. + +2. En el paso **Select an action** , seleccione **Manage your organization** y haga clic en **Continue**. + + Screenshot showing the manage organization option in GitHub integration + +La pantalla **Manage GitHub integration** muestra su organización conectada con su estado de sincronización actual y tipos de datos. + +### Actualizar datos + +La opción Actualizar datos proporciona una forma optimizada de actualizar sus datos de GitHub en New Relic. + +**Para actualizar los datos:** + +1. Desde la pantalla **Manage GitHub integration** , ubique su organización. + +2. Haga clic en **Refresh data** junto a la organización que desea actualizar y luego haga clic en **Continue**. + +3. En el paso **Refresh Data** , haga clic en **Sync on demand**. + +Luego, el sistema validará sus licencias de GitHub y el acceso a la organización, obtendrá solo datos nuevos o modificados desde la última sincronización, procesará y mapeará los datos actualizados según los tipos de datos seleccionados y actualizará el estado de integración para reflejar la última timestamp de sincronización y los recuentos de datos. + +**¿Qué se actualiza?** + +* Equipos y sus miembros +* cambios de repositorio (repositorio nuevo, repositorio archivado, cambios de licencias) +* Propiedad del equipo actualizada a través de propiedades personalizadas + + + **Frecuencia de actualización**: puede actualizar los datos con tanta frecuencia como sea necesario. El proceso normalmente demora unos minutos dependiendo del tamaño de su organización y los tipos de datos seleccionados. + + +### Editar la configuración de integración + +Emplee la opción **Edit** para modificar su configuración de integración luego de la configuración inicial. Puede ajustar qué tipos de datos se sincronizan entre GitHub y New Relic, así como la selección de qué equipos están sincronizados. + +**Para editar la integración de GitHub:** + +1. Desde la pantalla **Manage GitHub integration** , ubique su organización. + +2. Haga clic en **Edit** junto a la organización que desea actualizar y luego haga clic en **Continue**. + +3. En el paso **Edit Integration Settings**, ajuste sus selecciones según sea necesario. + +4. Haga clic en **Save changes** para aplicar las actualizaciones. + +**¿Qué sucede durante la edición?** + +* Los datos actuales permanecen intactos durante los cambios de configuración. Si tu selección de equipos para sincronizar ahora es diferente, la selección anterior no se eliminará de New Relic, pero quedará sin sincronización con GitHub. Puedes eliminar estos equipos en la función Equipos. 
+* Las nuevas configuraciones se aplican a las sincronizaciones posteriores +* Puede obtener una vista previa de los cambios antes de aplicarlos +* La integración continúa ejecutar con la configuración anterior hasta que almacene los cambios + +### Configurar la propiedad automática del equipo + +Puede asignar automáticamente el repositorio de GitHub a sus equipos agregando `teamOwningRepo` como una propiedad personalizada en GitHub. + +Cree la propiedad personalizada en el nivel de organización y asigne un valor a la propiedad personalizada en el nivel de repositorio. Además, puede configurar una propiedad personalizada para varios repositorios a nivel de organización simultáneamente. + +Luego, en New Relic Teams, habilite la característica de propiedad automatizada, cerciorar de usar `team` como clave de etiqueta. + +Una vez configurado esto, asociaremos automáticamente cada repositorio con su equipo correcto. + +Para obtener más información sobre la creación de propiedades personalizadas, consulte la [documentación de GitHub](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization). + +### Desinstalar la integración de GitHub + +Al desinstalar la integración de GitHub se detiene la sincronización de datos de la organización seleccionada. Se le dará la opción de conservar o eliminar los datos previamente importados dentro de New Relic. + +**Para desinstalar:** + +1. Desde la pantalla **Manage GitHub integration** , ubique la organización que desea desinstalar y haga clic en **Uninstall**. + +2. En el cuadro de diálogo de confirmación, seleccione si desea Conservar los datos o Eliminarlos. + +3. Revise los detalles y haga clic en Desinstalar organización para confirmar. + +4. Verá un mensaje de éxito confirmando la desinstalación. + + + **Retención de datos luego de la desinstalación**: los datos que se conservan ya no se sincronizarán con GitHub y se pueden eliminar manualmente más adelante dentro de la plataforma New Relic (por ejemplo, a través de la capacidad de Teams). + \ No newline at end of file diff --git a/src/i18n/content/es/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx b/src/i18n/content/es/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx new file mode 100644 index 00000000000..bb420222abe --- /dev/null +++ b/src/i18n/content/es/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx @@ -0,0 +1,567 @@ +--- +title: Inteligencia de la arquitectura del servicio con GitHub Enterprise (on-premises) +tags: + - New Relic integrations + - GitHub Enterprise integration +metaDescription: Integrate your on-premise GitHub Enterprise (GHE) environment with New Relic using a secure collector service and GitHub App for automated data ingestion. +freshnessValidatedDate: never +translationType: machine +--- + + + Todavía estamos trabajando en esta característica, ¡pero nos encantaría que la probaras! + + Esta característica se proporciona actualmente como parte de un programa de vista previa de conformidad con nuestras [políticas de prelanzamiento](/docs/licenses/license-information/referenced-policies/new-relic-pre-release-policy). + + +¿Busca obtener información más profunda sobre la arquitectura de su servicio aprovechando los datos de su cuenta de GitHub Enterprise local? 
La integración de New Relic GitHub Enterprise importa repositorios y equipos directamente a la plataforma New Relic utilizando un servicio de recopilación seguro implementado dentro de su red privada. + +Con la nueva función de obtención selectiva de datos, puede elegir exactamente qué tipos de datos importar, ya sean equipos, repositorios y solicitudes de extracción, o ambos. Esta integración tiene como objetivo mejorar la gestión y visibilidad de [Equipos](/docs/service-architecture-intelligence/teams/teams), [Catálogos](/docs/service-architecture-intelligence/catalogs/catalogs) y [Cuadro de mandos](/docs/service-architecture-intelligence/scorecards/getting-started) dentro de New Relic. Para obtener más información, consulte la [capacidad de Inteligencia de Arquitectura de Servicios](/docs/service-architecture-intelligence/getting-started). + +**Requisitos previos** + +* Cuenta de GitHub Enterprise on-premises con privilegios de administrador de la organización. +* Entorno Docker para ejecutar el servicio de recopilación dentro de su red de GitHub Enterprise. +* Cuenta de New Relic con los permisos apropiados para crear integraciones. + +## Consideraciones de Seguridad + +Esta integración sigue las mejores prácticas de seguridad: + +* Utiliza la autenticación de la aplicación de GitHub con permisos mínimos requeridos +* Los eventos de webhook se autentican mediante claves secretas +* Toda la transmisión de datos se realiza a través de HTTPS +* No se almacenan ni se transmiten credenciales de usuario +* Solo se importan los datos de repositorios y equipos. + +**Para configurar la integración de GitHub Enterprise:** + + + + ## Cree y configure una aplicación de GitHub + + En su instancia de GHE, navegue a **Settings → Developer Settings → GitHub Apps → New GitHub App**. Para obtener instrucciones detalladas sobre cómo crear una aplicación de GitHub, consulte la [documentación de GitHub sobre el registro de una aplicación de GitHub](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app). + + ### Configurar permisos + + Configure los permisos de la aplicación con precisión para garantizar una obtención de datos sin problemas durante la sincronización inicial y una escucha eficiente de los eventos de webhook a partir de entonces. Los permisos de la aplicación definen el alcance del acceso que la aplicación tiene a varios recursos de repositorio y organización en GitHub. Al adaptar estos permisos, puede mejorar la seguridad, asegurando que la aplicación solo acceda a los datos necesarios, minimizando la exposición. La configuración adecuada facilita la sincronización inicial de datos sin problemas y el manejo confiable de eventos, optimizando la integración de la aplicación con el ecosistema de GitHub. + + Para obtener orientación detallada sobre los permisos de la aplicación de GitHub, consulte la [documentación de GitHub sobre la configuración de permisos para las aplicaciones de GitHub](https://docs.github.com/en/apps/creating-github-apps/setting-up-a-github-app/choosing-permissions-for-a-github-app). 
+ + #### Permisos de repositorio requeridos + + Configure los siguientes permisos a nivel de repositorio exactamente como se muestra para habilitar la sincronización de datos: + + * **Administración**: Solo lectura ✓ + * **Verificaciones**: Solo lectura ✓ + * **Estados de confirmación**: Seleccionado ✓ + * **Contenido**: Seleccionado ✓ + * **Propiedades personalizadas**: Seleccionado ✓ + * **Implementaciones**: Solo lectura ✓ + * **Metadatos**: Solo lectura (obligatorio) ✓ + * **Solicitudes de extracción**: Seleccionado ✓ + * **Webhooks**: Solo lectura ✓ + + #### Permisos de organización requeridos + + Configure los siguientes permisos a nivel de organización exactamente como se muestra: + + * **Administración**: Solo lectura ✓ + * **Roles de organización personalizados**: Solo lectura ✓ + * **Propiedades personalizadas**: Solo lectura ✓ + * **Roles de repositorio personalizados**: Solo lectura ✓ + * **Eventos**: Solo lectura ✓ + * **Miembros**: Solo lectura ✓ + * **Webhooks**: Solo lectura ✓ + + #### Suscripciones a eventos de webhook + + Seleccione los siguientes eventos de webhook exactamente como se muestran para la sincronización y el monitoreo en tiempo real: + + **✓ Seleccione estos eventos:** + + * `check_run` - Actualizaciones del estado de la ejecución de la verificación + * `check_suite` - Finalización del conjunto de pruebas + * `commit_comment` - Comentarios sobre las confirmaciones + * `create` - Creación de rama o etiqueta + * `custom_property` - Cambios de propiedad personalizada para asignaciones de equipo + * `custom_property_values` - Cambios en los valores de las propiedades personalizadas + * `delete` - Eliminación de rama o etiqueta + * `deployment` - Actividades de implementación + * `deployment_review` - Procesos de revisión de implementación + * `deployment_status` - Actualizaciones de estado de implementación + * `fork` - Eventos de bifurcación de repositorio + * `installation_target` - Cambios en la instalación de la aplicación de GitHub + * `label` - Cambios de etiqueta en problemas y solicitudes de extracción + * `member` - Cambios en el perfil de los miembros + * `membership` - Adiciones y eliminaciones de miembros + * `meta` - Cambios de metadatos de la aplicación GitHub + * `milestone` - Cambios de hitos + * `organization` - Cambios a nivel de la organización + * `public` - Cambios de visibilidad del repositorio + * `pull_request` - Actividades de solicitud de extracción + * `pull_request_review` - Actividades de revisión de solicitudes de extracción + * `pull_request_review_comment` - Actividades de comentarios de revisión + * `pull_request_review_thread` - Actividades de hilo de revisión de solicitudes de extracción + * `push` - Inserciones y confirmaciones de código + * `release` - Publicar versiones y actualizaciones + * `repository` - Creación, eliminación y modificaciones de repositorios + * `star` - Eventos de estrella del repositorio + * `status` - Actualizaciones del estado de confirmación + * `team` - Creación y modificaciones de equipos + * `team_add` - Adiciones de miembros del equipo + * `watch` - Eventos de seguimiento del repositorio + + + **Mejor práctica de seguridad**: Para reducir la exposición a la seguridad, siga el principio de acceso de privilegio mínimo y solo habilite los permisos mínimos necesarios para las necesidades de su integración. 
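+  Los eventos de webhook que recibe el servicio de recopilación se autentican con el secreto del evento que configurará en la siguiente sección. A modo de referencia, este es un boceto mínimo en shell, con valores hipotéticos, de cómo generar un secreto robusto y cómo reproducir la firma `X-Hub-Signature-256` que GitHub adjunta a cada entrega firmada con ese secreto:
+
+  ```shell
+  # Boceto ilustrativo con valores hipotéticos; no forma parte de los pasos oficiales.
+
+  # Generar un secreto del evento robusto (64 caracteres hexadecimales):
+  openssl rand -hex 32
+
+  # GitHub calcula un HMAC-SHA256 del cuerpo de cada entrega con ese secreto y lo envía
+  # en la cabecera X-Hub-Signature-256; el servicio de recopilación usa el mismo secreto
+  # para validar la entrega. Puede reproducir el cálculo sobre un payload guardado:
+  WEBHOOK_SECRET='reemplace-por-su-secreto'   # el mismo valor que GITHUB_APP_WEBHOOK_SECRET
+  openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" payload.json | awk '{print "sha256=" $2}'
+  ```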
+ + + ### Configurar webhooks + + Configure la URL del webhook y cree un secreto de evento personalizado para una comunicación segura: + + * **URL del webhook**: Utilice el siguiente formato según la implementación de su servicio de recopilación: + + * Para HTTP: `http://your-domain-name/github/sync/webhook` + * Para HTTPS: `https://your-domain-name/github/sync/webhook` + + **Ejemplo**: Si su servicio de recopilación se implementa en `collector.yourcompany.com`, la URL del webhook sería: `https://collector.yourcompany.com/github/sync/webhook` + + * **Secreto del evento**: Genere una cadena aleatoria segura (más de 32 caracteres) para la autenticación del webhook. Guarde este valor, ya que lo necesitará para la variable de entorno `GITHUB_APP_WEBHOOK_SECRET`. + + ### Generar y convertir claves + + 1. Después de crear la aplicación de GitHub, debe generar una clave privada. En la configuración de su aplicación de GitHub, haga clic en **Generate a private key**. La aplicación generará y descargará automáticamente un ID de aplicación único y un archivo de clave privada (formato .pem). Guárdelos de forma segura, ya que serán necesarios para la configuración del servicio de recopilación. + + 2. Convierta su archivo de clave privada descargado al formato DER y luego codifíquelo en Base64: + + **Paso 1: Convertir .pem a formato DER** + + ```bash + openssl rsa -outform der -in private-key.pem -out output.der + ``` + + **Paso 2: Codifique el archivo DER en Base64** + + ```bash + # For Linux/macOS + base64 -i output.der -o outputBase64 + cat outputBase64 # Copy this output + + # For Windows (using PowerShell) + [Convert]::ToBase64String([IO.File]::ReadAllBytes("output.der")) + + # Alternative for Windows (using certutil) + certutil -encode output.der temp.b64 && findstr /v /c:- temp.b64 + ``` + + Copie la cadena Base64 resultante y utilícela como valor para la variable de entorno `GITHUB_APP_PRIVATE_KEY` en la configuración de su recopilador. + + **✓ Indicadores de éxito:** + + * La aplicación de Github se crea correctamente + * El ID de la aplicación y la clave privada se guardan de forma segura + * La URL del webhook está configurada y es accesible + + + + ## Prepare las variables de entorno + + Antes de implementar el servicio de recopilación, recopile la siguiente información: + + ### Variables de entorno requeridas + +
+ Solution
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Variable + + Fuente + + Cómo obtener +
+ `NR_API_KEY` + + New Relic + + Genere una clave API desde el panel de New Relic. +
+ `NR_LICENSE_KEY` + + New Relic + + Genere una clave de licencia desde el panel de New Relic. +
+ `GHE_BASE_URL` + + Servidor GHE + + La URL base para su servidor GHE (por ejemplo, + + `https://source.datanot.us` + + ). +
+ `GITHUB_APP_ID` + + Aplicación de GitHub + + El ID de la aplicación único generado cuando creó la aplicación de GitHub. +
+ `GITHUB_APP_PRIVATE_KEY` + + Aplicación de GitHub + + El contenido del archivo de clave privada ( + + `.pem` + + ), convertido a una cadena Base64. Consulte el paso 1 para obtener instrucciones de conversión. +
+ `GITHUB_APP_WEBHOOK_SECRET` + + Aplicación de GitHub + + El valor personalizado del Secreto del evento que estableció al crear la aplicación de GitHub. +
+ + ### Variables de entorno SSL opcionales + + Las siguientes son variables de entorno opcionales para hacer HTTPS de la API. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Variable opcional + + Fuente + + Cómo obtener +
+ `SERVER_SSL_KEY_STORE` + + Configuración SSL + + Ruta al archivo del almacén de claves SSL para la configuración HTTPS. Consulte las instrucciones de configuración del certificado SSL a continuación. +
+ `SERVER_SSL_KEY_STORE_PASSWORD` + + Configuración SSL + + Contraseña para el archivo de almacén de claves SSL. Esta es la contraseña que estableció al crear el almacén de claves PKCS12. +
+ `SERVER_SSL_KEY_STORE_TYPE` + + Configuración SSL + + Tipo de almacén de claves SSL (por ejemplo, PKCS12, JKS). Utilice PKCS12 al seguir las instrucciones de configuración de SSL a continuación. +
+ `SERVER_SSL_KEY_ALIAS` + + Configuración SSL + + Alias ​​para la clave SSL dentro del almacén de claves. Este es el nombre que especifica al crear el almacén de claves. +
+ `SERVER_PORT` + + Configuración SSL + + Puerto del servidor para la comunicación HTTPS. Use 8443 para HTTPS. +
+ + ### Instrucciones de configuración del certificado SSL + + Para obtener un certificado SSL de una Autoridad de Certificación (CA) confiable para la configuración HTTPS, siga estos pasos: + + 1. **Generar una clave privada y una solicitud de firma de certificado (CSR)**: + + ```bash + openssl req -new -newkey rsa:2048 -nodes -keyout mycert.key -out mycert.csr + ``` + + 2. **Envíe el CSR a la CA que elija**: Envíe el archivo `mycert.csr` a la Autoridad de certificación (CA) que elija (por ejemplo, DigiCert, Let's Encrypt, GoDaddy). + + 3. **Complete la validación del dominio**: Complete cualquier paso de validación de dominio requerido según las instrucciones de la CA. + + 4. **Descargue el certificado**: Descargue los archivos de certificado emitidos de la CA (comúnmente un archivo `.crt` o `.pem`). + + 5. **Crear un almacén de claves PKCS12**: Combine el certificado y la clave privada en un almacén de claves PKCS12: + + ```bash + openssl pkcs12 -export -in mycert.crt -inkey mycert.key -out keystore.p12 -name mycert + ``` + + 6. **Usar el almacén de claves**: Use el archivo `keystore.p12` generado como valor para `SERVER_SSL_KEY_STORE` en su configuración de Docker. + + + + ## Implementar el servicio de recopilación + + El servicio de recopilación se entrega como una imagen de Docker. La implementación se puede realizar de dos maneras: + + ### Opción A: Usando Docker Compose (recomendado) + + Cree un archivo Docker Compose que automatice la descarga e implementación del servicio. + + 1. Cree un archivo `docker-compose.yml` con el siguiente contenido: + + ```yaml + version: '3.9' + + services: + nr-ghe-collector: + image: newrelic/nr-ghe-collector:tag # use latest tag available in dockerhub starting with v* + container_name: nr-ghe-collector + restart: unless-stopped + ports: + - "8080:8080" # HTTP port, make 8443 in case of HTTPS + environment: + # Required environment variables + - NR_API_KEY=${NR_API_KEY:-DEFAULT_VALUE} + - NR_LICENSE_KEY=${NR_LICENSE_KEY:-DEFAULT_VALUE} + - GHE_BASE_URL=${GHE_BASE_URL:-DEFAULT_VALUE} + - GITHUB_APP_ID=${GITHUB_APP_ID:-DEFAULT_VALUE} + - GITHUB_APP_PRIVATE_KEY=${GITHUB_APP_PRIVATE_KEY:-DEFAULT_VALUE} + - GITHUB_APP_WEBHOOK_SECRET=${GITHUB_APP_WEBHOOK_SECRET:-DEFAULT_VALUE} + + # Optional SSL environment variables (uncomment and configure if using HTTPS) + # - SERVER_SSL_KEY_STORE=${SERVER_SSL_KEY_STORE} + # - SERVER_SSL_KEY_STORE_PASSWORD=${SERVER_SSL_KEY_STORE_PASSWORD} + # - SERVER_SSL_KEY_STORE_TYPE=${SERVER_SSL_KEY_STORE_TYPE} + # - SERVER_SSL_KEY_ALIAS=${SERVER_SSL_KEY_ALIAS} + # - SERVER_PORT=8443 + #volumes: # Uncomment the line below if using SSL keystore + # - ./keystore.p12:/app/keystore.p12 # path to your keystore file + network_mode: bridge + + networks: + nr-network: + driver: bridge + ``` + + 2. Establezca sus variables de entorno reemplazando los marcadores de posición `DEFAULT_VALUE` en el archivo Docker Compose con sus valores reales, o cree variables de entorno en su sistema antes de ejecutar el comando. + + + Nunca confirme archivos de entorno que contengan secretos en el control de versiones. Utilice prácticas seguras de gestión de secretos en producción. + + + 3. 
Ejecute el siguiente comando para iniciar el servicio: + + ```bash + docker-compose up -d + ``` + + ### Opción B: Ejecución directa de la imagen de Docker + + Puede descargar la imagen de Docker directamente desde nuestro [registro de Docker Hub](https://hub.docker.com/r/newrelic/nr-ghe-collector) y ejecutarla utilizando la canalización CI/CD o el método de implementación preferido de su organización. Tenga en cuenta que el cliente debe pasar todas las variables de entorno enumeradas anteriormente al iniciar el servicio de recopilación. + + **✓ Indicadores de éxito:** + + * El servicio de recopilación se está ejecutando y es accesible en el puerto configurado + * Los logs del contenedor Docker muestran un inicio exitoso sin errores + * El servicio responde a las comprobaciones de estado (si está configurado) + + + + ## Instale la aplicación de GitHub en las organizaciones + + Después de que el servicio de recopilación se esté ejecutando, debe instalar la aplicación de GitHub en las organizaciones específicas que desea integrar: + + 1. Navegue a su instancia de GitHub Enterprise. + 2. Vaya a **Settings** → **Developer Settings** → **GitHub Apps**. + 3. Busque la aplicación de GitHub que creó en el paso 1 y haga clic en ella. + 4. En la barra lateral izquierda, haga clic en **Install App**. + 5. Seleccione las organizaciones donde desea instalar la aplicación. + 6. Elija si desea instalar en todos los repositorios o seleccionar repositorios específicos. + 7. Haga clic en **Install** para completar la instalación. + + **✓ Indicadores de éxito:** + + * Las entregas de webhook aparecen en la configuración de la aplicación de GitHub + * No hay errores de autenticación en los logs del servicio de recopilación + + + + ## Complete la configuración de la integración en la interfaz de usuario de New Relic + + Una vez que el servicio de recopilación se está ejecutando y la aplicación de GitHub está instalada en su(s) organización(es) GHE, complete la configuración de integración como se indica en la interfaz de usuario de New Relic: + + 1. Las organizaciones GHE correspondientes aparecerán en la interfaz de usuario de New Relic. + + 2. Para iniciar la sincronización inicial de datos, haga clic en **First time sync**. + + 3. *(Opcional)* Haga clic en **On-demand sync** para sincronizar los datos manualmente. + + + Puede sincronizar manualmente los datos una vez cada 4 horas. El botón de **On-demand sync** permanece deshabilitado si la sincronización se realizó dentro de las 4 horas anteriores. + + + 4. Después de ver el mensaje de Inicio de sincronización, haga clic en **Continue**. La pantalla **GitHub Enterprise Integration** muestra el recuento de equipos y repositorios, actualizándose cada 5 segundos. Permita de 15 a 30 minutos para la importación completa de todos los datos (el tiempo depende del recuento de repositorios). + + GitHub Enterprise Integration dashboard showing integration progress + + ### Visualización de sus datos + + En la pantalla de **GitHub Enterprise Integration**: + + * Para ver la información de los equipos importados en [Teams](/docs/service-architecture-intelligence/teams/teams), haga clic en **Go to Teams**. + * Para ver la información de los repositorios importados en [Catalogs](/docs/service-architecture-intelligence/catalogs/catalogs), haga clic en **Go to Repositories**. 
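+  Si la importación tarda más de lo esperado o no aparecen datos, un punto de partida útil son los logs del propio servicio de recopilación. Un boceto mínimo, suponiendo la implementación de la Opción A con el servicio y contenedor `nr-ghe-collector` definidos en el `docker-compose.yml` anterior:
+
+  ```shell
+  # Estado del contenedor del servicio de recopilación
+  docker-compose ps
+
+  # Seguir los logs en busca de errores de autenticación o de entrega de webhooks
+  docker-compose logs -f nr-ghe-collector
+
+  # Alternativa si ejecutó la imagen directamente con docker run
+  docker logs --tail 200 nr-ghe-collector
+  ```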
+
+
+
+  ## Configurar asignaciones de equipo (opcional)
+
+  Puede asignar automáticamente repositorios de GitHub a sus equipos agregando `teamOwningRepo` como una propiedad personalizada en GitHub Enterprise.
+
+  1. Cree la propiedad personalizada en el nivel de organización y asigne un valor a la propiedad personalizada en el nivel de repositorio. Además, puede configurar una propiedad personalizada para varios repositorios a nivel de organización simultáneamente.
+  2. Luego, en New Relic Teams, habilite la función [Automated Ownership](/docs/service-architecture-intelligence/teams/manage-teams/#assign-ownership), asegurándose de usar `team` como clave de etiqueta.
+
+  Una vez configurado esto, New Relic asocia automáticamente cada repositorio con su equipo correcto.
+
+  Para obtener más información sobre la creación de propiedades personalizadas, consulte la [documentación de GitHub](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization).
+
+
+
+## Resolución de problemas
+
+### Problemas y soluciones comunes
+
+**Fallos en la entrega del webhook:**
+
+* Verifique que el servicio de recopilación se esté ejecutando y sea accesible desde GitHub Enterprise
+* Compruebe la configuración del cortafuegos y la conectividad de la red
+
+**Errores de autenticación:**
+
+* Verifique que el ID de la aplicación de GitHub y la clave privada estén configurados correctamente
+* Asegúrese de que la clave privada esté correctamente convertida al formato DER y codificada en Base64
+* Verifique que el secreto del webhook coincida entre la aplicación de GitHub y la configuración del recopilador
+
+**Fallos de sincronización:**
+
+* Verifique que la aplicación de GitHub tenga los permisos requeridos
+* Compruebe que la aplicación esté instalada en las organizaciones correctas
+* Revise los logs del servicio de recopilación para obtener mensajes de error específicos
+
+**Problemas de conectividad de red:**
+
+* Asegúrese de que el servicio de recopilación pueda acceder a su instancia de GitHub Enterprise
+* Verifique que los certificados SSL estén configurados correctamente si usa HTTPS
+* Compruebe la resolución de DNS para su dominio de GitHub Enterprise
+
+## Desinstalación
+
+Para desinstalar la integración de GitHub Enterprise:
+
+1. Navegue a la interfaz de usuario de GitHub Enterprise.
+2. Vaya a la configuración de la organización donde está instalada la aplicación.
+3. Desinstale la aplicación de GitHub directamente desde la interfaz de GitHub Enterprise. Esta acción activará el proceso de backend para dejar de recopilar datos.
+4. Detenga y elimine el servicio de recopilación de su entorno Docker (vea el ejemplo a continuación).
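+Como referencia para el paso 4, un boceto mínimo que supone que implementó el servicio con la Opción A (`docker-compose.yml`) o directamente con el nombre de contenedor `nr-ghe-collector`:
+
+```shell
+# Detener y eliminar el servicio de recopilación implementado con Docker Compose
+docker-compose down
+
+# Alternativa si ejecutó la imagen de Docker directamente
+docker stop nr-ghe-collector && docker rm nr-ghe-collector
+```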
\ No newline at end of file diff --git a/src/i18n/content/fr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx b/src/i18n/content/fr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx index 17663915726..6d11e8379c2 100644 --- a/src/i18n/content/fr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx +++ b/src/i18n/content/fr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx @@ -141,7 +141,7 @@ L'agent instrumente automatiquement ces framework et bibliothèque : * Spray 1.3.1 vers la dernière version * Tomcat 7.0.0 à la version la plus récente * Undertow 1.1.0.Final à la dernière version - * WebLogic 12.1.2.1 à 12.2.x (exclusif) + * WebLogic 12.1.2.1 à 14.1.1 * WebSphere 8 à 9 (exclusif) * WebSphere Liberty 8.5 à la dernière version * Wildfly 8.0.0.Final (dernière version) diff --git a/src/i18n/content/fr/docs/cci/azure-cci.mdx b/src/i18n/content/fr/docs/cci/azure-cci.mdx index 0dea0d32725..83849a86de1 100644 --- a/src/i18n/content/fr/docs/cci/azure-cci.mdx +++ b/src/i18n/content/fr/docs/cci/azure-cci.mdx @@ -395,7 +395,7 @@ Avant de connecter Azure à Intelligence Coûts du cloud, assurez-vous d'av - Saisissez le chemin de base où les données de facturation sont stockées dans le conteneur (par exemple, `20251001-20251031` pour octobre 2025). **Remarque**: si votre exportation de facturation est publiée directement à la racine du conteneur, laissez ce champ vide. + Entrez le chemin relatif dans le conteneur à partir duquel vous voyez les baisses de données de facturation au format mensuel (par exemple, `20251101-20251130` pour novembre 2025, ou `20251201-20251231` pour décembre 2025). **Remarque**: si votre exportation de facturation publie directement à la racine du conteneur, laissez ce champ vide. diff --git a/src/i18n/content/fr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/fr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx index df3d998096a..d4849df82da 100644 --- a/src/i18n/content/fr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx +++ b/src/i18n/content/fr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx @@ -26,28 +26,26 @@ Notre intégration Snowflake vous permet de collecter des données complètes su ## Configurer les métriques Snowflake - Exécutez la commande ci-dessous pour stocker les métriques Snowflake au format JSON, permettant à nri-flex de les lire. Assurez-vous de modifier le COMPTE, le NOM D'UTILISATEUR et le SNOWSQL\_PWD en conséquence. + Exécutez la commande ci-dessous pour stocker les métriques Snowflake au format JSON, ce qui permet à nri-flex de les lire. Veillez à modifier `ACCOUNT`, `USERNAME` et `SNOWSQL_PWD` en conséquence. 
```shell - - # Run the below command as a 1 minute cronjob - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o 
timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", 
SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json - + # Run the below command as a 1 minute cronjob + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > 
/tmp/snowflake-stage-storage-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o 
remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json ``` @@ -59,130 +57,126 @@ Notre intégration Snowflake vous permet de collecter des données complètes su 1. Créez un fichier nommé `nri-snowflake-config.yml` dans le répertoire d'intégration : ```shell - - touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml - + touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml ``` 2. Ajoutez le snippet suivant à votre fichier `nri-snowflake-config.yml` pour permettre à l'agent de capturer les données Snowflake : ```yml - - --- - integrations: - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountMetering - apis: - - name: snowflakeAccountMetering - file: /tmp/snowflake-account-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseLoadHistory - apis: - - name: snowflakeWarehouseLoadHistory - file: /tmp/snowflake-warehouse-load-history-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseMetering - apis: - - name: snowflakeWarehouseMetering - file: /tmp/snowflake-warehouse-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeTableStorage - apis: - - name: snowflakeTableStorage - file: /tmp/snowflake-table-storage-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStageStorageUsage - apis: - - name: snowflakeStageStorageUsage - file: /tmp/snowflake-stage-storage-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeReplicationUsgae - apis: - - name: snowflakeReplicationUsgae - file: /tmp/snowflake-replication-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeQueryHistory - apis: - - name: snowflakeQueryHistory - file: /tmp/snowflake-query-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakePipeUsage - apis: - - name: snowflakePipeUsage - file: /tmp/snowflake-pipe-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLongestQueries - apis: - - name: snowflakeLongestQueries - file: /tmp/snowflake-longest-queries.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLoginFailure - apis: - - name: snowflakeLoginFailure - file: /tmp/snowflake-login-failures.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDatabaseStorageUsage - apis: - - name: 
snowflakeDatabaseStorageUsage - file: /tmp/snowflake-database-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDataTransferUsage - apis: - - name: snowflakeDataTransferUsage - file: /tmp/snowflake-data-transfer-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeCreditUsageByWarehouse - apis: - - name: snowflakeCreditUsageByWarehouse - file: /tmp/snowflake-credit-usage-by-warehouse.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAutomaticClustering - apis: - - name: snowflakeAutomaticClustering - file: /tmp/snowflake-automatic-clustering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStorageUsage - apis: - - name: snowflakeStorageUsage - file: /tmp/snowflake-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountDetails - apis: - - name: snowflakeAccountDetails - file: /tmp/snowflake-account-details.json - + --- + integrations: + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountMetering + apis: + - name: snowflakeAccountMetering + file: /tmp/snowflake-account-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseLoadHistory + apis: + - name: snowflakeWarehouseLoadHistory + file: /tmp/snowflake-warehouse-load-history-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseMetering + apis: + - name: snowflakeWarehouseMetering + file: /tmp/snowflake-warehouse-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeTableStorage + apis: + - name: snowflakeTableStorage + file: /tmp/snowflake-table-storage-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStageStorageUsage + apis: + - name: snowflakeStageStorageUsage + file: /tmp/snowflake-stage-storage-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeReplicationUsgae + apis: + - name: snowflakeReplicationUsgae + file: /tmp/snowflake-replication-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeQueryHistory + apis: + - name: snowflakeQueryHistory + file: /tmp/snowflake-query-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakePipeUsage + apis: + - name: snowflakePipeUsage + file: /tmp/snowflake-pipe-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLongestQueries + apis: + - name: snowflakeLongestQueries + file: /tmp/snowflake-longest-queries.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLoginFailure + apis: + - name: snowflakeLoginFailure + file: /tmp/snowflake-login-failures.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDatabaseStorageUsage + apis: + - name: snowflakeDatabaseStorageUsage + file: /tmp/snowflake-database-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDataTransferUsage + apis: + - name: snowflakeDataTransferUsage + file: /tmp/snowflake-data-transfer-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeCreditUsageByWarehouse + apis: + - name: snowflakeCreditUsageByWarehouse + file: /tmp/snowflake-credit-usage-by-warehouse.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAutomaticClustering + apis: + - name: snowflakeAutomaticClustering + file: /tmp/snowflake-automatic-clustering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStorageUsage + apis: + - name: snowflakeStorageUsage + file: /tmp/snowflake-storage-usage.json + - name: nri-flex + interval: 
30s + config: + name: snowflakeAccountDetails + apis: + - name: snowflakeAccountDetails + file: /tmp/snowflake-account-details.json ``` @@ -192,9 +186,7 @@ Notre intégration Snowflake vous permet de collecter des données complètes su Redémarrez votre agent d’infrastructure. ```shell - sudo systemctl restart newrelic-infra.service - ``` Dans quelques minutes, votre application enverra des métriques à [one.newrelic.com](https://one.newrelic.com). @@ -215,9 +207,7 @@ Notre intégration Snowflake vous permet de collecter des données complètes su Voici une requête NRQL pour vérifier les métriques Snowflake : ```sql - - SELECT * from snowflakeAccountSample - + SELECT * FROM snowflakeAccountSample ``` diff --git a/src/i18n/content/fr/docs/logs/forward-logs/azure-log-forwarding.mdx b/src/i18n/content/fr/docs/logs/forward-logs/azure-log-forwarding.mdx index af72ff28f44..001504c50ab 100644 --- a/src/i18n/content/fr/docs/logs/forward-logs/azure-log-forwarding.mdx +++ b/src/i18n/content/fr/docs/logs/forward-logs/azure-log-forwarding.mdx @@ -36,19 +36,80 @@ Pour envoyer le log depuis votre événement Hub : Suivez ces étapes : 1. Assurez-vous d'avoir un . + 2. Depuis **[one.newrelic.com](https://one.newrelic.com/launcher/logger.log-launcher)**, cliquez sur **Integrations & Agents** dans la navigation de gauche. + 3. Dans la catégorie **Logging** , cliquez sur la tuile **Microsoft Azure Event Hub** dans la liste des sources de données. + 4. Sélectionnez le compte vers lequel vous souhaitez envoyer le log et cliquez sur **Continue**. + 5. Cliquez sur **Generate API key** et copiez la clé API générée. + 6. Cliquez sur **Deploy to Azure** et un nouvel onglet s’ouvrira avec le modèle ARM chargé dans Azure. + 7. Sélectionnez le **Resource group** où vous souhaitez créer les ressources nécessaires, ainsi qu'un **Region**. Bien que cela ne soit pas obligatoire, nous vous recommandons d'installer le modèle dans un nouveau groupe de ressources, pour éviter de supprimer accidentellement l'un des composants qu'il crée. + 8. Dans le champ **New Relic license key** , collez la clé API précédemment copiée. + 9. Assurez-vous que le [point de terminaison New Relic](/docs/logs/log-api/introduction-log-api/#endpoint) est défini sur celui correspondant à votre compte. -10. Facultatif : définissez sur `true` les [logs d’activité de l’abonnement Azure](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) que vous souhaitez transférer. Consultez [les informations d’abonnement](#subscription-activity-logs) dans ce document pour plus de détails. -11. Cliquez sur **Review + create**, vérifiez les données que vous avez insérées et cliquez sur **Create**. + +10. Sélectionnez le mode de mise à l'échelle. La valeur par défaut est `Basic`. + +11. Facultatif : configurez les paramètres de traitement par lots EventHub (disponibles dans la version 2.8.0+) pour optimiser les performances : + + * **Taille maximale du lot d'événements**: nombre maximal d'événements par lot (par défaut : 500, minimum : 1) + * **Taille minimale du lot d'événements**: nombre minimal d'événements par lot (par défaut : 20, minimum : 1) + * **Durée d'attente maximale**: durée d'attente maximale pour créer un lot au format HH:MM:SS (par défaut : 00:00:30) + +12. Facultatif : définissez sur `true` les [logs d’activité de l’abonnement Azure](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) que vous souhaitez transférer. 
Consultez [les informations d’abonnement](#subscription-activity-logs) dans ce document pour plus de détails. + +13. Cliquez sur **Review + create**, vérifiez les données que vous avez insérées et cliquez sur **Create**. Notez que le modèle est idempotent. Vous pouvez démarrer le transfert du log à partir d’Événement Hub, puis réexécuter le même modèle pour configurer le transfert des [Azure Subscription Activity Logs](#subscription-activity-logs) en effectuant l’étape 10. +### Configurez le traitement par lots et la mise à l'échelle EventHub (facultatif) [#eventhub-configuration] + +À partir de la version 2.8.0, le modèle ARM prend en charge les options de configuration EventHub avancées pour optimiser les performances et le débit : + +**Paramètres de traitement par lots du déclencheur EventHub :** + +Vous pouvez configurer le comportement de traitement par lots pour contrôler la façon dont les événements sont traités. Ces paramètres sont configurés en tant que paramètres d'application de fonction Azure : + +* **Taille maximale du lot d'événements** : nombre maximal d'événements livrés dans un lot à la fonction (par défaut : 500, minimum : 1). Cela contrôle la limite supérieure des événements traités ensemble. + +* **Taille minimale du lot d'événements** : nombre minimal d'événements livrés dans un lot à la fonction (par défaut : 20, minimum : 1). La fonction attendra d'accumuler au moins autant d'événements avant de les traiter, sauf si la durée d'attente maximale est atteinte. + +* **Durée d'attente maximale** : durée maximale d'attente pour créer un lot avant de le livrer à la fonction (par défaut : 00:00:30, format : HH:MM:SS). Cela garantit un traitement rapide, même lorsque le volume d'événements est faible. + +Ces paramètres permettent d'optimiser le débit et l'utilisation des ressources en fonction de votre volume de journaux et de vos exigences de traitement. Ajustez ces valeurs en fonction de votre cas d'utilisation spécifique : + +* Augmentez la taille des lots pour les scénarios à volume élevé afin d'améliorer le débit +* Diminuez la taille des lots pour les exigences de faible latence +* Ajustez le temps d'attente pour équilibrer la latence et l'efficacité du traitement par lots + +**Configuration de la mise à l'échelle (v2.7.0+) :** + +Le modèle prend en charge la configuration du mode de mise à l'échelle des fonctions Azure, ce qui vous permet d'optimiser les coûts et les performances en fonction de votre charge de travail : + +* **Mode de mise à l'échelle de base**: utilise par défaut un plan basé sur la consommation avec référence SKU dynamique (niveau Y1), où Azure ajoute et supprime automatiquement des instances de fonction en fonction du nombre d'événements entrants. + + * Si l'option `disablePublicAccessToStorageAccount` est activée, il utilise un plan avec référence SKU de base (niveau B1) pour prendre en charge l'intégration VNet. + * Ce mode est idéal pour les charges de travail variables et offre une optimisation automatique des coûts avec une tarification au paiement. + * L'espace de noms EventHub comprend 4 partitions avec une mise à l'échelle des unités de débit standard. + +* **Mode de mise à l'échelle Entreprise**: offre des capacités de mise à l'échelle avancées avec des ressources de calcul dédiées et plus de contrôle sur la mise à l'échelle des instances. Ce mode offre : + + * Fonctionnalité de mise à l'échelle automatique pour l'application de fonction et EventHub. 
+ * Plan d'hébergement Elastic Premium (EP1) avec mise à l'échelle par site activée + * Gonflement automatique EventHub activé avec un maximum de 40 unités de débit + * Nombre de partitions accru (32 partitions contre 4 en mode de base) pour une meilleure parallélisation + * Performances prévisibles et latence plus faible avec des instances préchauffées + * Mieux adapté aux scénarios de transfert de journaux critiques et à volume élevé + +**Remarques importantes :** + +* Lors de la mise à niveau du mode de base vers le mode Entreprise, vous devrez reprovisionner EventHub en raison de la limitation d'Azure selon laquelle une référence SKU standard ne peut pas modifier le nombre de partitions après la création. + ### Facultatif : envoyer le log d'activité Azure à partir de votre abonnement [#subscription-activity-logs] diff --git a/src/i18n/content/fr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx b/src/i18n/content/fr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx index 92843dd9006..be5c475782e 100644 --- a/src/i18n/content/fr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx +++ b/src/i18n/content/fr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx @@ -95,7 +95,11 @@ Lorsqu'une erreur ANR se produit, Android capture une trace du stack. Une t **Désobfuscation :** -New Relic ne désobfusque actuellement pas automatiquement les traces du stack ANR au sein de la plateforme. La prise en charge de cette fonctionnalité est prévue pour une sortie future. En attendant, vous pouvez télécharger la trace du stack ANR obscurcie depuis New Relic, puis utiliser des outils hors ligne, tels que l'utilitaire `ndk-stack` ou `retrace` de Proguard/R8, pour symboliser manuellement la trace du stack. +New Relic symbolise automatiquement les trames de pile Java dans les traces de pile ANR, fournissant des noms de méthode et des numéros de ligne lisibles directement dans la plateforme. + + + Les trames de pile natives (NDK) ne sont pas actuellement symbolisées. Pour les trames de pile natives, vous pouvez télécharger la trace de pile depuis New Relic et utiliser des outils hors ligne tels que `ndk-stack` pour symboliser manuellement. + ## Désactiver monitoringANR [#disable-anr-monitoring] diff --git a/src/i18n/content/fr/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx b/src/i18n/content/fr/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx new file mode 100644 index 00000000000..0a4106d1bdf --- /dev/null +++ b/src/i18n/content/fr/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx @@ -0,0 +1,79 @@ +--- +subject: Docs +releaseDate: '2025-12-19' +version: 'December 15 - December 19, 2025' +translationType: machine +--- + +### Nouveaux documents + +* Ajout d'une section [« Impact sur l'utilisateur »](/docs/browser/new-relic-browser/browser-pro-features/user-impact) afin de fournir des conseils complets pour comprendre les signaux de frustration et l'impact des performances sur l'expérience utilisateur. + +### Changements majeurs + +* [Catalogue des actions](/docs/workflow-automation/setup-and-configuration/actions-catalog) mis à jour avec une restructuration et une organisation approfondies des actions de workflow. +* [Logs Browser mis à jour : Commencez](/docs/browser/browser-monitoring/browser-pro-features/browser-logs/get-started) avec les mises à jour automatiques et manuelles de la capture log. 
+* [Pages vues mises à jour : Analysez les performances de la page](/docs/browser/new-relic-browser/browser-pro-features/page-views-examine-page-performance) en tenant compte des signaux de frustration et des informations sur l’impact sur les performances. +* Ajout [d'une référence aux fournisseurs de données](/docs/sap-solutions/additional-resources/data-providers-reference) afin de fournir des instructions détaillées aux fournisseurs de données des solutions SAP. + +### Modifications mineures + +* Ajout de la documentation configuration du filtre eBPF aux [guides d'installation de l'observabilité réseau eBPF sur Kubernetes](/docs/ebpf/k8s-installation) et [sur Linux](/docs/ebpf/linux-installation). +* [Configuration du protocole de contexte Agentic modèle d'IA](/docs/agentic-ai/mcp/setup) mise à jour avec des instructions de configuration améliorées. +* [Compatibilité PHP mise à jour de l'agent et des exigences](/docs/apm/agents/php-agent/getting-started/php-agent-compatibility-requirements) avec Kinesis Data Streams et Drupal 11.1/11.2 compatibilité. +* Mise à jour de [la compatibilité .NET de l'agent et des exigences](/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements) avec les dernières versions compatibles vérifiées pour la dépendance. +* Mise à jour [de l'agent de compatibilité Node.js et des exigences](/docs/apm/agents/nodejs-agent/getting-started/compatibility-requirements-nodejs-agent) avec le dernier rapport de compatibilité. +* Mise à jour [de l'agent de compatibilité Java et des exigences](/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent) avec les informations de compatibilité actuelles. +* Amélioration [de l'instrumentation de la fonction AWS Lambda avec Python](/docs/serverless-function-monitoring/azure-function-monitoring/container) grâce à une commande d'installation explicite pour les fonctions Azure conteneurisées. +* [Monitoring du flux réseau](/docs/network-performance-monitoring/setup-performance-monitoring/network-flow-monitoring) mise à jour avec la prise en charge de kTranslate pour la dernière version d'Ubuntu. +* [Mise à jour de Lambda vers l'expérience APM](/docs/serverless-function-monitoring/aws-lambda-monitoring/instrument-lambda-function/upgrade-to-apm-experience) pour refléter la nouvelle prise en charge des fonctions de conteneur. +* Ajout de billets Nouveautés pour : + * [Transaction 360](/whats-new/2025/12/whats-new-12-15-transaction-360) + +### Notes de version + +* Restez au courant de nos dernières sorties : + + * [Agent PHP v12.3.0.28](/docs/release-notes/agent-release-notes/php-release-notes/php-agent-12-3-0-28): + + * Ajout de l'instrumentation Kinesis Data Streams pour aws-sdk-php. + * Correction d'un problème où le daemon ne vidait pas le cache package au redémarrage. + * Version de Go mise à jour à 1.25.5. + + * [Agent Node.js v13.8.1](/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-13-8-1): + * Instrumentation AWS Lambda mise à jour pour éviter d'encapsuler le rappel du gestionnaire s'il est absent. + + * [Agent Java v8.25.1](/docs/release-notes/agent-release-notes/java-release-notes/java-agent-8251): + * Correction d'une erreur de coroutine Kotlin concernant l'implémentation tierce de `CancellableContinuation`. + + * [Agent Browser v1.306.0](/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1.306.0): + + * Ajout d'un contrôle pour l'API log via un indicateur RUM distinct. 
+ * Validation améliorée de responseStart avant de s'appuyer sur onTTFB. + * Suppression de la syntaxe de saut de ligne dans la sortie webpack. + + * [Intégration Kubernetes v3.51.1](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-1): + * sortie avec les versions graphiques newrelic-infrastructure-3.56.1 et nri-bundle-6.0.30. + + * [NRDOT v1.7.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-15): + * Ajout de composants ohi à la distribution nrdot-collector-experimental. + + * [NRDOT v1.6.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-12): + + * Mise à jour des versions des composants hôteliers de v0.135.0 à v0.141.0. + * Correction de la vulnérabilité CVE-2025-61729 par la mise à jour vers golang 1.24.11. + * Correction de la dépréciation de la configuration du processeur de transformation de la version 0.119.0. + + * [Job Manager sortie 493](/docs/release-notes/synthetics-release-notes/job-manager-release-notes/job-manager-release-493): + + * Correction d'un problème de compatibilité avec Docker 29 causé par la mise à jour de la version minimale de l'API à 1.44. + * Ajout d'un masquage des données sensibles pour couvrir les résultats des tâches ayant échoué. + + * [Environnement d'exécution Browser Node rc1.5](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.5): + * Sortie mise à jour avec les dernières modifications. + + * [Node API Runtime rc1.5](/docs/release-notes/synthetics-release-notes/node-api-runtime-release-notes/node-api-runtime-rc1.5): + * Sortie mise à jour avec les dernières modifications. + + * [Environnement d'exécution Browser Node rc1.6](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.6): + * Sortie mise à jour avec les dernières modifications. \ No newline at end of file diff --git a/src/i18n/content/fr/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx b/src/i18n/content/fr/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx new file mode 100644 index 00000000000..40e9ddf9cd1 --- /dev/null +++ b/src/i18n/content/fr/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx @@ -0,0 +1,13 @@ +--- +subject: Kubernetes integration +releaseDate: '2025-12-23' +version: 3.51.2 +translationType: machine +--- + +Pour une description détaillée des modifications, consultez les [notes de version](https://github.com/newrelic/nri-kubernetes/releases/tag/v3.51.2). 
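+
+À titre d'illustration uniquement, une mise à niveau vers les versions de charts listées ci-dessous pourrait ressembler à l'esquisse suivante (le nom de release `newrelic-bundle` et l'espace de noms `newrelic` sont des hypothèses à adapter à votre déploiement) :
+
+```shell
+# Ajouter le dépôt de charts New Relic s'il n'est pas déjà configuré, puis le mettre à jour
+helm repo add newrelic https://helm-charts.newrelic.com
+helm repo update
+
+# Mettre à niveau (ou installer) le bundle en réutilisant les valeurs existantes
+helm upgrade --install newrelic-bundle newrelic/nri-bundle \
+  --namespace newrelic \
+  --version 6.0.31 \
+  --reuse-values
+```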
+ +Cette intégration est incluse dans les versions de graphiques suivantes : + +* [newrelic-infrastructure-3.56.2](https://github.com/newrelic/nri-kubernetes/releases/tag/newrelic-infrastructure-3.56.2) +* [nri-bundle-6.0.31](https://github.com/newrelic/helm-charts/releases/tag/nri-bundle-6.0.31) \ No newline at end of file diff --git a/src/i18n/content/fr/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx b/src/i18n/content/fr/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx new file mode 100644 index 00000000000..122a1e773f8 --- /dev/null +++ b/src/i18n/content/fr/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx @@ -0,0 +1,17 @@ +--- +subject: NRDOT +releaseDate: '2025-12-19' +version: 1.8.0 +metaDescription: Release notes for NRDOT Collector version 1.8.0 +translationType: machine +--- + +## Log des modifications + +### Caractéristiques + +* feat: Mise à jour des versions des composants otel de v0.141.0 à v0.142.0 (#464) + +### Débogage + +* fix : forcer expr-lang/expr :1.17.7 pour corriger CVE-2025-68156 (#468) \ No newline at end of file diff --git a/src/i18n/content/fr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx b/src/i18n/content/fr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx index de485c9498e..35a781ae95e 100644 --- a/src/i18n/content/fr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx +++ b/src/i18n/content/fr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx @@ -36,7 +36,7 @@ New Relic conseille vivement à ses clients qui utilisent l’instrumentation de - diff --git a/src/i18n/content/fr/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx b/src/i18n/content/fr/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx new file mode 100644 index 00000000000..d05b4614cde --- /dev/null +++ b/src/i18n/content/fr/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx @@ -0,0 +1,169 @@ +--- +title: Intelligence de l'architecture des services avec l'intégration cloud GitHub +tags: + - New Relic integrations + - GitHub integration +metaDescription: 'Learn how to integrate GitHub with New Relic to import repositories, teams, and user data for enhanced service architecture intelligence.' +freshnessValidatedDate: never +translationType: machine +--- + +L'intégration GitHub améliore les services d'architecture intelligente en enrichissant vos données New Relic avec le contexte de votre organisation GitHub. En connectant votre compte GitHub, vous pouvez importer vos données de référentiel, d'équipes et de demande de tirage dans New Relic. Ces informations supplémentaires renforcent la valeur des [équipes](/docs/service-architecture-intelligence/teams/teams), [des catalogues](/docs/service-architecture-intelligence/catalogs/catalogs) et [des tableaux de bord](/docs/service-architecture-intelligence/scorecards/getting-started), vous offrant une vue plus complète et connectée de votre travail d'ingénierie. + +## Avant de commencer + +**Prérequis :** + +* Vous devez posséder le rôle de Gestionnaire d'organisation ou de Gestionnaire de domaine d'authentification. + +**Plateformes supportées :** + +* GitHub Cloud +* GitHub Enterprise Cloud (sans résidence des données) + +**Régions prises en charge :** États-Unis et UE + + + * GitHub Enterprise Server et GitHub Enterprise Cloud avec résidence des données ne sont pas pris en charge. 
+ * L'installation de l'intégration dans les comptes utilisateurs GitHub n'est pas prise en charge. Bien que GitHub permette l'installation de l'application au niveau utilisateur, le processus de synchronisation ne fonctionnera pas et aucune donnée ne sera importée dans New Relic. + * L'intégration GitHub n'est pas conforme à FedRAMP. + + +## Quelles données peuvent être synchronisées + +L'intégration GitHub vous permet de choisir de manière sélective les types de données à importer dans New Relic, vous donnant ainsi le contrôle sur les informations synchronisées : + +### Types de données disponibles + +* **Référentiel et demande de tirage**: Importer les données du référentiel et de la demande de tirage pour une meilleure visibilité du code et un suivi de la déploiement + +* **Équipes**: Importez les équipes GitHub et leurs membres pour améliorer la modélisation de la gestion et de la propriété des équipes. + + + **Conflits d'intégration d'équipes**: Si des équipes ont déjà été intégrées à New Relic à partir d'une autre source (telle qu'Okta ou un autre fournisseur d'identité), les équipes GitHub ne pourront pas être récupérées et stockées afin d'éviter les conflits de données. Dans ce cas, vous ne pouvez sélectionner que les données du référentiel et de la demande de tirage.\ + **Exigence de visibilité de l'adresse e-mail de l'utilisateur**: Pour garantir que l'appartenance à l'équipe soit alignée sur vos équipes GitHub, l'utilisateur GitHub devra avoir configuré son adresse e-mail comme publique dans les paramètres de son profil GitHub. Les membres de l'équipe disposant d'une configuration de messagerie privée seront exclus du processus de synchronisation des données utilisateur. + + +## Configurer l'intégration GitHub + +1. Accédez à **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**. + +2. Sélectionnez le compte sur lequel vous souhaitez configurer l'intégration. + +3. Sélectionnez **Set up a new integration**, puis cliquez sur **Continue**. + +4. Sur l’écran **Begin integration** : + + a. Cliquez sur **Get started in GitHub** pour connecter votre compte. L'application d'observabilité New Relic s'ouvre sur le marketplace GitHub. + + b. Terminez l'installation de l'application au sein de votre organisation GitHub. Après l'installation, vous serez redirigé vers l'interface New Relic. + + c. Sélectionnez à nouveau **Démarrer l'intégration**, puis cliquez sur **Continue**. + + d. **Select your data preferences**: choisissez les types de données que vous souhaitez synchroniser : + + * **Teams + Users**: Importez les structures d’équipe GitHub et les informations utilisateur. + * **Repositories + Pull Requests**: Importer les données du référentiel et de la demande de tirage. + * **Both**: Importer tous les types de données disponibles. + + e. Si vous avez sélectionné **Teams + Users**, la liste de toutes les équipes GitHub s'affichera. Sélectionnez toutes les équipes ou une sélection d'entre elles à importer. + + f. Cliquez sur **Start first sync** pour commencer l’importation des données sélectionnées. + + g. Après avoir affiché le message de **Sync started** , cliquez sur **Continue**. L'écran **Integration status** affichera le nombre de vos types de données sélectionnés (équipes, référentiel, etc.), en s'actualisant toutes les 5 secondes. Prévoyez quelques minutes pour l’importation complète de toutes les données. + + GitHub integration + +5. 
*(Facultatif)* Sur l'écran **GitHub integration**, vous pouvez accéder à vos données importées : + + * Cliquez sur **Go to Teams** pour afficher les équipes importées sur la page [Teams](/docs/service-architecture-intelligence/teams/teams) (si des équipes ont été sélectionnées lors de la configuration). + * Cliquez sur **Go to Repositories** pour afficher les informations de référentiel importées sur le catalogue [Repositories](/docs/service-architecture-intelligence/repositories/repositories) (si le référentiel a été sélectionné lors de l'installation). + +## Gérez votre intégration GitHub + +Après avoir configuré votre intégration GitHub, vous pouvez la gérer via l'interface New Relic. Cela comprend l'actualisation des données, la modification de la configuration et la désinstallation si nécessaire. + +### Gestion de l'intégration des accès + +1. Accédez à **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**. + +2. À l’étape **Select an action** , sélectionnez **Manage your organization**, puis cliquez sur **Continue**. + + Screenshot showing the manage organization option in GitHub integration + +L'écran **Manage GitHub integration** affiche votre organisation connectée avec son état de synchronisation actuel et ses types de données. + +### Actualiser les données + +L'option « Actualiser les données » offre un moyen simplifié de mettre à jour vos données GitHub dans New Relic. + +**Pour actualiser les données :** + +1. Depuis l’écran **Manage GitHub integration** , recherchez votre organisation. + +2. Cliquez sur **Refresh data** à côté de l'organisation que vous souhaitez mettre à jour, puis cliquez sur **Continue**. + +3. À l’étape **Refresh data** , cliquez sur **Sync on demand**. + +Le système validera ensuite vos autorisations GitHub et l'accès à votre organisation, récupérera uniquement les données nouvelles ou modifiées depuis la dernière synchronisation, traitera et mappera les données mises à jour en fonction des types de données sélectionnés et mettra à jour l'état d'intégration pour refléter le dernier horodatage de synchronisation et le nombre de données. + +**Ce qui est rafraîchi :** + +* Les équipes et leurs membres +* modifications du référentiel (nouveau référentiel (référentiel), référentiel archivé (référentiel), modifications des autorisations) +* Propriété de l'équipe mise à jour via des propriétés personnalisées + + + **Fréquence d'actualisation**: vous pouvez actualiser les données aussi souvent que nécessaire. Le processus prend généralement quelques minutes en fonction de la taille de votre organisation et des types de données sélectionnés. + + +### Modifier les paramètres d'intégration + +Utilisez l'option **Edit** pour modifier la configuration de votre intégration après la configuration initiale. Vous pouvez configurer les types de données synchronisés entre GitHub et New Relic, ainsi que les équipes à synchroniser. + +**Pour modifier l’intégration GitHub :** + +1. Depuis l’écran **Manage GitHub integration** , recherchez votre organisation. + +2. Cliquez sur **Edit** à côté de l’organisation que vous souhaitez mettre à jour, puis cliquez sur **Continue**. + +3. À l'étape **Edit Integration Settings**, ajustez vos sélections selon vos besoins. + +4. Cliquez sur **Save changes** pour appliquer vos mises à jour. + +**Que se passe-t-il pendant l'édition :** + +* Les données actuelles restent intactes lors des modifications de configuration. 
Si votre sélection d'équipes à synchroniser est différente, la sélection précédente ne sera pas supprimée de New Relic, mais elle ne sera plus synchronisée avec GitHub. Vous pouvez supprimer ces équipes dans la section Équipes. +* Les nouveaux paramètres s'appliquent aux synchronisations ultérieures +* Vous pouvez prévisualiser les modifications avant de les appliquer +* L'intégration continue de fonctionner avec les paramètres précédents jusqu'à ce que vous enregistriez les modifications + +### Configurer l'attribution automatique de la propriété de l'équipe + +Vous pouvez attribuer automatiquement le référentiel GitHub à leurs équipes en ajoutant `teamOwningRepo` comme propriété personnalisée dans GitHub. + +Créez la propriété personnalisée au niveau de l’organisation et attribuez une valeur à la propriété personnalisée au niveau du référentiel. De plus, vous pouvez configurer une propriété personnalisée pour plusieurs référentiels au niveau de l'organisation simultanément. + +Ensuite, dans New Relic Teams, activez la fonctionnalité de propriété automatisée, en veillant à utiliser `team` comme clé de tag. + +Une fois cela mis en place, nous ferons correspondre automatiquement chaque référentiel (référentiel) avec son équipe appropriée. + +Pour plus d'informations sur la création de propriétés personnalisées, reportez-vous à la [documentation GitHub](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization). + +### Désinstaller l'intégration GitHub + +La désinstallation de l'intégration GitHub interrompt la synchronisation des données de l'organisation sélectionnée. Vous aurez la possibilité de conserver ou de supprimer les données précédemment importées dans New Relic. + +**Pour désinstaller :** + +1. Depuis l’écran **Manage GitHub integration** , recherchez l’organisation que vous souhaitez désinstaller et cliquez sur **Uninstall**. + +2. Dans la boîte de dialogue de confirmation, indiquez si vous souhaitez conserver ou supprimer les données. + +3. Vérifiez les détails et cliquez sur Désinstaller l'organisation pour confirmer. + +4. Vous verrez un message de réussite confirmant la désinstallation. + + + **Rétention des données après désinstallation**: Les données conservées ne seront plus synchronisées avec GitHub et pourront être supprimées manuellement ultérieurement au sein de la plateforme New Relic (par exemple, via la fonctionnalité Teams). + \ No newline at end of file diff --git a/src/i18n/content/fr/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx b/src/i18n/content/fr/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx new file mode 100644 index 00000000000..4c717d5f60b --- /dev/null +++ b/src/i18n/content/fr/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx @@ -0,0 +1,567 @@ +--- +title: Intelligence de l'architecture des services avec GitHub Enterprise (sur site) +tags: + - New Relic integrations + - GitHub Enterprise integration +metaDescription: Integrate your on-premise GitHub Enterprise (GHE) environment with New Relic using a secure collector service and GitHub App for automated data ingestion. +freshnessValidatedDate: never +translationType: machine +--- + + + Nous travaillons toujours sur cette fonctionnalité, mais nous aimerions que vous l'essayiez ! 
+ + Cette fonctionnalité est actuellement fournie dans le cadre d'un programme d'aperçu conformément à nos [politiques de pré-sortie](/docs/licenses/license-information/referenced-policies/new-relic-pre-release-policy). + + +Cherchez-vous à obtenir des informations plus approfondies sur l'architecture de vos services en exploitant les données de votre compte GitHub Enterprise sur site ? L'intégration New Relic GitHub Enterprise importe les référentiels et les équipes directement dans la plateforme New Relic à l'aide d'un service de collecte sécurisé déployé au sein de votre réseau privé. + +Grâce à la nouvelle fonctionnalité d'extraction sélective des données, vous pouvez choisir exactement les types de données à importer, qu'il s'agisse d'équipes, de référentiels et de demandes d'extraction, ou des deux. Cette intégration vise à améliorer la gestion et la visibilité des [Équipes](/docs/service-architecture-intelligence/teams/teams), des [Catalogues](/docs/service-architecture-intelligence/catalogs/catalogs) et des [Tableaux de bord](/docs/service-architecture-intelligence/scorecards/getting-started) au sein de New Relic. Pour plus d'informations, consultez la [fonctionnalité Service Architecture Intelligence](/docs/service-architecture-intelligence/getting-started). + +**Prérequis** + +* Compte GitHub Enterprise sur site avec les privilèges d'administrateur de l'organisation. +* Environnement Docker pour exécuter le service de collecte au sein de votre réseau GitHub Enterprise. +* Compte New Relic avec les autorisations appropriées pour créer des intégrations. + +## Considérations de sécurité + +Cette intégration suit les meilleures pratiques de sécurité : + +* Utilise l'authentification GitHub App avec des autorisations minimales requises +* Les événements Webhook sont authentifiés à l'aide de clés secrètes +* Toutes les transmissions de données se font via HTTPS +* Aucune information d'identification utilisateur n'est stockée ou transmise +* Seules les données des référentiels et des équipes sont importées + +**Pour configurer l'intégration GitHub Enterprise :** + + + + ## Créez et configurez une application GitHub + + Dans votre instance GHE, accédez à **Settings → Developer Settings → GitHub Apps → New GitHub App**. Pour des instructions détaillées sur la création d'une GitHub App, reportez-vous à la [documentation GitHub sur l'enregistrement d'une GitHub App](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app). + + ### Configurer les permissions + + Configurez avec précision les autorisations de l'application pour garantir une extraction transparente des données lors de la synchronisation initiale et une écoute efficace des événements de webhook par la suite. Les autorisations de l'application définissent l'étendue de l'accès de l'application à diverses ressources de référentiel et d'organisation sur GitHub. En adaptant ces autorisations, vous pouvez améliorer la sécurité, en vous assurant que l'application n'accède qu'aux données nécessaires tout en minimisant l'exposition. Une configuration appropriée facilite la synchronisation initiale des données et la gestion fiable des événements, optimisant ainsi l'intégration de l'application avec l'écosystème de GitHub. 
+ + Pour des conseils détaillés sur les permissions de l'application GitHub, consultez la [documentation GitHub sur la définition des permissions pour les applications GitHub](https://docs.github.com/en/apps/creating-github-apps/setting-up-a-github-app/choosing-permissions-for-a-github-app). + + #### Autorisations de référentiel requises + + Configurez exactement les autorisations au niveau du référentiel comme indiqué pour activer la synchronisation des données : + + * **Administration**: Lecture seule ✓ + * **Checks**: Lecture seule ✓ + * **États de validation**: Sélectionné ✓ + * **Contenu**: Sélectionné ✓ + * **Propriétés personnalisées**: Sélectionné ✓ + * **Deployments**: Lecture seule ✓ + * **Métadonnées**: Lecture seule (obligatoire) ✓ + * **Demandes d'extraction**: Sélectionné ✓ + * **Webhooks**: Lecture seule ✓ + + #### Autorisations d'organisation requises + + Configurez les permissions au niveau de l'organisation suivantes exactement comme indiqué : + + * **Administration**: Lecture seule ✓ + * **Rôles d'organisation personnalisés**: Lecture seule ✓ + * **Propriétés personnalisées**: Lecture seule ✓ + * **Rôles de référentiel personnalisés**: Lecture seule ✓ + * **Événements**: Lecture seule ✓ + * **Membres**: Lecture seule ✓ + * **Webhooks**: Lecture seule ✓ + + #### Abonnements aux événements Webhook + + Sélectionnez exactement les événements de webhook suivants, tels qu'ils sont affichés, pour la synchronisation et le monitoring en temps réel : + + **✓ Sélectionnez ces événements :** + + * `check_run` - Mises à jour de l'état des exécutions de vérification + * `check_suite` - Achèvement de la suite de contrôles + * `commit_comment` - Commentaires sur les commits + * `create` - Création de branche ou de balise + * `custom_property` - Modifications des propriétés personnalisées pour les affectations d’équipe + * `custom_property_values` - Modifications des valeurs des propriétés personnalisées + * `delete` - Suppression de branche ou de balise + * `deployment` - Activités de déploiement + * `deployment_review` - Processus de révision du déploiement + * `deployment_status` - Mises à jour de l'état du déploiement + * `fork` - Événements de fork de référentiel + * `installation_target` - Modifications de l'installation de l'application GitHub + * `label` - Modifications d'étiquettes sur les problèmes et les demandes d'extraction + * `member` - Modifications du profil des membres + * `membership` - Ajouts et suppressions de membres + * `meta` - Modifications des métadonnées de l'application GitHub + * `milestone` - Modifications des jalons + * `organization` - Modifications au niveau de l'organisation + * `public` - Modifications de la visibilité du référentiel + * `pull_request` - Activités de demande d'extraction + * `pull_request_review` - Activités de révision des demandes d'extraction + * `pull_request_review_comment` - Activités de commentaires de révision + * `pull_request_review_thread` - Activités du fil de discussion de révision de la demande d'extraction + * `push` - Poussées et validations de code + * `release` - Publications et mises à jour de versions + * `repository` - Création, suppression et modifications de référentiels + * `star` - Événements d'étoile de référentiel + * `status` - Mises à jour de l'état de validation + * `team` - Création et modifications d'équipes + * `team_add` - Ajouts de membres d'équipe + * `watch` - Événements de monitoring de référentiel + + + **Meilleure pratique de sécurité**: pour réduire l'exposition à la sécurité, suivez le principe 
du moindre privilège et n'activez que les autorisations minimales requises pour vos besoins d'intégration. + + + ### Configurer les webhooks + + Configurez l'URL du webhook et créez un secret d'événement personnalisé pour une communication sécurisée : + + * **URL du webhook**: Utilisez le format suivant en fonction du déploiement de votre service de collecteur : + + * Pour HTTP : `http://your-domain-name/github/sync/webhook` + * Pour HTTPS : `https://your-domain-name/github/sync/webhook` + + **Exemple**: Si votre service de collecteur est déployé sur `collector.yourcompany.com`, l'URL du webhook serait : `https://collector.yourcompany.com/github/sync/webhook` + + * **Secret d’événement**: Générez une chaîne aléatoire sécurisée (32 caractères ou plus) pour l’authentification du webhook. Enregistrez cette valeur, car vous en aurez besoin pour la variable d’environnement `GITHUB_APP_WEBHOOK_SECRET`. + + ### Générer et convertir des clés + + 1. Après avoir créé l'application GitHub, vous devez générer une clé privée. Dans les paramètres de votre application GitHub, cliquez sur **Generate a private key**. L'application générera et téléchargera automatiquement un ID d'application unique et un fichier de clé privée (format .pem). Enregistrez-les en toute sécurité, car ils seront nécessaires pour la configuration du service de collecte. + + 2. Convertissez votre fichier de clé privée téléchargé au format DER, puis encodez-le en Base64 : + + **Étape 1 : Convertir .pem au format DER** + + ```bash + openssl rsa -outform der -in private-key.pem -out output.der + ``` + + **Étape 2 : Encoder le fichier DER en Base64** + + ```bash + # For Linux/macOS + base64 -i output.der -o outputBase64 + cat outputBase64 # Copy this output + + # For Windows (using PowerShell) + [Convert]::ToBase64String([IO.File]::ReadAllBytes("output.der")) + + # Alternative for Windows (using certutil) + certutil -encode output.der temp.b64 && findstr /v /c:- temp.b64 + ``` + + Copiez la chaîne Base64 résultante et utilisez-la comme valeur pour la variable d'environnement `GITHUB_APP_PRIVATE_KEY` dans votre configuration de collecteur. + + **✓ Indicateurs de réussite :** + + * L’application Github est créée avec succès + * L'ID de l'application et la clé privée sont enregistrés en toute sécurité + * L'URL du webhook est configurée et accessible + + + + ## Préparer les variables d'environnement + + Avant de déployer le service de collecte, rassemblez les informations suivantes : + + ### Variables d'environnement requises + +
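+
+    À titre d'illustration uniquement, l'esquisse ci-dessous (valeurs purement fictives) montre comment exporter ces variables dans votre shell avant de démarrer le service de collecte ; le tableau qui suit décrit chaque variable en détail :
+
+    ```bash
+    # Valeurs d'exemple fictives : remplacez-les par vos propres valeurs
+    export NR_API_KEY="VOTRE_CLE_API_NEW_RELIC"
+    export NR_LICENSE_KEY="VOTRE_CLE_DE_LICENCE_NEW_RELIC"
+    export GHE_BASE_URL="https://github.example.com"
+    export GITHUB_APP_ID="123456"
+    export GITHUB_APP_PRIVATE_KEY="$(cat outputBase64)"         # chaîne Base64 générée à l'étape précédente
+    export GITHUB_APP_WEBHOOK_SECRET="VOTRE_SECRET_DE_WEBHOOK"  # doit correspondre au secret défini dans l'application GitHub
+    ```
+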
+ Solution
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Variable + + Source + + Comment obtenir +
+ `NR_API_KEY` + + New Relic + + Générez une clé API à partir du tableau de bord New Relic. +
+ `NR_LICENSE_KEY` + + New Relic + + Générez une clé de licence à partir du tableau de bord New Relic. +
+ `GHE_BASE_URL` + + Serveur GHE + + L’URL de base de votre serveur GHE (par exemple, + + `https://source.datanot.us` + + ). +
+ `GITHUB_APP_ID` + + Application GitHub + + L'ID d'application unique généré lors de la création de l'application GitHub. +
+ `GITHUB_APP_PRIVATE_KEY` + + Application GitHub + + Le contenu du fichier de clé privée ( + + `.pem` + + ), converti en chaîne Base64. Consultez l’étape 1 pour les instructions de conversion. +
+ `GITHUB_APP_WEBHOOK_SECRET` + + Application GitHub + + La valeur secrète d'événement personnalisée que vous avez définie lors de la création de l'application GitHub. +
+ + ### Variables d'environnement SSL facultatives + + Les variables d'environnement facultatives suivantes permettent d'effectuer des appels HTTPS d'API. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Variable facultative + + Source + + Comment obtenir +
+ `SERVER_SSL_KEY_STORE` + + Configuration SSL + + Chemin d'accès au fichier de magasin de clés SSL pour la configuration HTTPS. Consultez les instructions de configuration du certificat SSL ci-dessous. +
+ `SERVER_SSL_KEY_STORE_PASSWORD` + + Configuration SSL + + Mot de passe du fichier de magasin de clés SSL. Il s’agit du mot de passe que vous avez défini lors de la création du magasin de clés PKCS12. +
+ `SERVER_SSL_KEY_STORE_TYPE` + + Configuration SSL + + Type du magasin de clés SSL (par exemple, PKCS12, JKS). Utilisez PKCS12 lorsque vous suivez les instructions de configuration SSL ci-dessous. +
+ `SERVER_SSL_KEY_ALIAS` + + Configuration SSL + + Alias de la clé SSL dans le magasin de clés. C'est le nom que vous spécifiez lors de la création du magasin de clés. +
+ `SERVER_PORT` + + Configuration SSL + + Port serveur pour la communication HTTPS. Utilisez 8443 pour HTTPS. +
+ + ### Instructions de configuration du certificat SSL + + Pour obtenir un certificat SSL auprès d'une autorité de certification (CA) de confiance pour la configuration HTTPS, suivez ces étapes : + + 1. **Générer une clé privée et une demande de signature de certificat (CSR)**: + + ```bash + openssl req -new -newkey rsa:2048 -nodes -keyout mycert.key -out mycert.csr + ``` + + 2. **Soumettez le CSR à la CA de votre choix**: soumettez le fichier `mycert.csr` à l'autorité de certification (par exemple, DigiCert, Let's Encrypt, GoDaddy) de votre choix. + + 3. **Effectuez la validation du domaine**: effectuez toutes les étapes de validation du domaine requises, comme indiqué par la CA. + + 4. **Télécharger le certificat**: Téléchargez les fichiers de certificat émis à partir de l'AC (généralement un fichier `.crt` ou `.pem`). + + 5. **Créer un magasin de clés PKCS12**: Combinez le certificat et la clé privée dans un magasin de clés PKCS12 : + + ```bash + openssl pkcs12 -export -in mycert.crt -inkey mycert.key -out keystore.p12 -name mycert + ``` + + 6. **Utilisez le magasin de clés**: utilisez le fichier `keystore.p12` généré comme valeur pour `SERVER_SSL_KEY_STORE` dans votre configuration Docker. + + + + ## Déployer le service de collecteur + + Le service de collecte est fourni sous forme d'image Docker. Le déploiement peut être effectué de deux manières : + + ### Option A : Utilisation de Docker Compose (recommandé) + + Créez un fichier Docker Compose qui automatise le téléchargement et le déploiement du service. + + 1. Créez un fichier `docker-compose.yml` avec le contenu suivant : + + ```yaml + version: '3.9' + + services: + nr-ghe-collector: + image: newrelic/nr-ghe-collector:tag # use latest tag available in dockerhub starting with v* + container_name: nr-ghe-collector + restart: unless-stopped + ports: + - "8080:8080" # HTTP port, make 8443 in case of HTTPS + environment: + # Required environment variables + - NR_API_KEY=${NR_API_KEY:-DEFAULT_VALUE} + - NR_LICENSE_KEY=${NR_LICENSE_KEY:-DEFAULT_VALUE} + - GHE_BASE_URL=${GHE_BASE_URL:-DEFAULT_VALUE} + - GITHUB_APP_ID=${GITHUB_APP_ID:-DEFAULT_VALUE} + - GITHUB_APP_PRIVATE_KEY=${GITHUB_APP_PRIVATE_KEY:-DEFAULT_VALUE} + - GITHUB_APP_WEBHOOK_SECRET=${GITHUB_APP_WEBHOOK_SECRET:-DEFAULT_VALUE} + + # Optional SSL environment variables (uncomment and configure if using HTTPS) + # - SERVER_SSL_KEY_STORE=${SERVER_SSL_KEY_STORE} + # - SERVER_SSL_KEY_STORE_PASSWORD=${SERVER_SSL_KEY_STORE_PASSWORD} + # - SERVER_SSL_KEY_STORE_TYPE=${SERVER_SSL_KEY_STORE_TYPE} + # - SERVER_SSL_KEY_ALIAS=${SERVER_SSL_KEY_ALIAS} + # - SERVER_PORT=8443 + #volumes: # Uncomment the line below if using SSL keystore + # - ./keystore.p12:/app/keystore.p12 # path to your keystore file + network_mode: bridge + + networks: + nr-network: + driver: bridge + ``` + + 2. Définissez vos variables d’environnement en remplaçant les espaces réservés `DEFAULT_VALUE` dans le fichier Docker Compose par vos valeurs réelles, ou créez des variables d’environnement sur votre système avant d’exécuter la commande. + + + Ne jamais valider les fichiers d'environnement contenant des secrets dans le contrôle de version. Utilisez des pratiques de gestion des secrets sécurisées en production. + + + 3. 
Exécutez la commande suivante pour démarrer le service : + + ```bash + docker-compose up -d + ``` + + ### Option B : Exécution directe de l'image Docker + + Vous pouvez télécharger l'image Docker directement à partir de notre [registre Docker Hub](https://hub.docker.com/r/newrelic/nr-ghe-collector) et l'exécuter à l'aide du pipeline CI/CD ou de la méthode de déploiement préférée de votre organisation. Notez que le client doit transmettre toutes les variables d'environnement répertoriées ci-dessus lors du démarrage du service de collecte. + + **✓ Indicateurs de réussite :** + + * Le service de collecte est en cours d'exécution et accessible sur le port configuré + * Les logs du conteneur Docker indiquent un démarrage réussi sans erreurs + * Le service répond aux contrôles d'intégrité (si configuré) + + + + ## Installer l'application GitHub sur les organisations + + Une fois le service de collecteur en cours d'exécution, vous devez installer l'application GitHub sur les organisations spécifiques que vous souhaitez intégrer : + + 1. Accédez à votre instance GitHub Enterprise. + 2. Accédez à **Settings** → **Developer Settings** → **GitHub Apps**. + 3. Recherchez l'application GitHub que vous avez créée à l'étape 1 et cliquez dessus. + 4. Dans la barre latérale gauche, cliquez sur **Install App**. + 5. Sélectionnez les organisations dans lesquelles vous souhaitez installer l'application. + 6. Choisissez d’installer sur tous les référentiels ou de sélectionner des référentiels spécifiques. + 7. Cliquez sur **Install** pour terminer l'installation. + + **✓ Indicateurs de réussite :** + + * Les livraisons de webhook apparaissent dans les paramètres de l’application GitHub + * Aucune erreur d’authentification dans les logs du service de collecte + + + + ## Configuration complète de l'intégration dans l'interface utilisateur New Relic + + Une fois le service de collecte en cours d'exécution et l'application GitHub installée sur votre ou vos organisations GHE, terminez la configuration de l'intégration comme indiqué dans l'interface utilisateur de New Relic : + + 1. Les organisations GHE correspondantes apparaîtront dans l'interface utilisateur de New Relic. + + 2. Pour démarrer la synchronisation initiale des données, cliquez sur **First time sync**. + + 3. *(Facultatif)* Cliquez sur **On-demand sync** pour synchroniser manuellement les données. + + + Vous pouvez synchroniser manuellement les données une fois toutes les 4 heures. Le bouton **On-demand sync** reste désactivé si la synchronisation a eu lieu dans les 4 heures précédentes. + + + 4. Après avoir affiché le message Sync started (Synchronisation démarrée), cliquez sur **Continue**. L'écran **GitHub Enterprise Integration** affiche le nombre d'équipes et de référentiels, en actualisant toutes les 5 secondes. Prévoyez 15 à 30 minutes pour l'importation complète de toutes les données (le délai dépend du nombre de référentiels). + + GitHub Enterprise Integration dashboard showing integration progress + + ### Affichage de vos données + + Sur l'écran **GitHub Enterprise Integration** : + + * Pour afficher les informations sur les équipes importées sur [Teams](/docs/service-architecture-intelligence/teams/teams), cliquez sur **Go to Teams**. + * Pour afficher les informations sur les référentiels importés sur [Catalogs](/docs/service-architecture-intelligence/catalogs/catalogs), cliquez sur **Go to Repositories**. 
+ + + + ## Configurer les attributions d'équipe (facultatif) + + Vous pouvez attribuer automatiquement des référentiels GitHub à leurs équipes en ajoutant `teamOwningRepo` en tant que propriété personnalisée dans GitHub Enterprise. + + 1. Créez la propriété personnalisée au niveau de l’organisation et attribuez une valeur à la propriété personnalisée au niveau du référentiel. De plus, vous pouvez configurer une propriété personnalisée pour plusieurs référentiels au niveau de l'organisation simultanément. + 2. Ensuite, dans New Relic Teams, activez la fonctionnalité [Automated Ownership](/docs/service-architecture-intelligence/teams/manage-teams/#assign-ownership), en veillant à utiliser `team` comme clé de tag. + + Une fois cette opération configurée, New Relic associe automatiquement chaque référentiel à son équipe correcte. + + Pour plus d'informations sur la création de propriétés personnalisées, reportez-vous à la [documentation GitHub](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization). + + + +## Dépannage + +### Problèmes courants et solutions + +**Échecs de livraison de webhook :** + +* Vérifiez que le service de collecte est en cours d'exécution et accessible depuis GitHub Enterprise +* Vérifiez les paramètres du pare-feu et la connectivité réseau + +**Erreurs d'authentification :** + +* Vérifiez que l’ID de l’application GitHub et la clé privée sont correctement configurés +* Assurez-vous que la clé privée est correctement convertie au format DER et encodée en Base64 +* Vérifiez que le secret du webhook correspond entre GitHub App et la configuration du collecteur + +**Échecs de synchronisation :** + +* Vérifiez que l'application GitHub dispose des permissions requises +* Vérifiez que l'application est installée sur les organisations appropriées +* Consultez les logs du service de collecte pour des messages d'erreur spécifiques + +**Problèmes de connectivité réseau :** + +* Assurez-vous que le service de collecte peut atteindre votre instance GitHub Enterprise +* Vérifiez que les certificats SSL sont correctement configurés si vous utilisez HTTPS +* Vérifiez la résolution DNS pour votre domaine GitHub Enterprise + +## Désinstallation + +Pour désinstaller l'intégration GitHub Enterprise : + +1. Accédez à l'interface utilisateur de votre GitHub Enterprise. +2. Accédez aux paramètres de l'organisation où l'application est installée. +3. Désinstallez l’application GitHub directement à partir de l’interface GitHub Enterprise. Cette action déclenchera le processus backend pour cesser la collecte de données. +4. Arrêtez et supprimez le service de collecte de votre environnement Docker. 
\ No newline at end of file diff --git a/src/i18n/content/jp/docs/cci/azure-cci.mdx b/src/i18n/content/jp/docs/cci/azure-cci.mdx index 90bafd86435..04d65d08553 100644 --- a/src/i18n/content/jp/docs/cci/azure-cci.mdx +++ b/src/i18n/content/jp/docs/cci/azure-cci.mdx @@ -395,7 +395,7 @@ Azure を Cloud Cost Intelligence に接続する前に、次のものを用意 - コンテナ内で課金データが保存される基本パスを入力します (例: 2025 年 10 月の場合は`20251001-20251031` )。**注**: 課金エクスポートをコンテナのルートに直接公開する場合は、このフィールドを空のままにしておきます。 + 月単位で課金データが表示されるコンテナ内の相対パスを入力します (たとえば、2025 年 11 月の場合は`20251101-20251130` 、2025 年 12 月の場合は`20251201-20251231`)。**注**: 課金エクスポートをコンテナのルートに直接公開する場合は、このフィールドを空のままにしておきます。 diff --git a/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx index acf4ff6a16f..1abb10b5176 100644 --- a/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx +++ b/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx @@ -26,28 +26,26 @@ translationType: machine ## Snowflakeメトリクスのセットアップ - 以下のコマンドを実行して、Snowflake メトリックを JSON 形式で保存し、nri-flex が読み取れるようにします。 ACCOUNT、USERNAME、SNOWSQL\_PWD を適宜変更してください。 + 以下のコマンドを実行して、Snowflake メトリクスを JSON 形式で保存し、nri-flex が読み取れるようにします。 `ACCOUNT` 、 `USERNAME` 、 `SNOWSQL_PWD`を適宜変更してください。 ```shell - - # Run the below command as a 1 minute cronjob - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as 
"FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o 
remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json - + # Run the below command as a 1 minute cronjob + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", 
avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM 
"SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json ``` @@ -59,130 +57,126 @@ translationType: machine 1. Integration ディレクトリに`nri-snowflake-config.yml`という名前のファイルを作成します。 ```shell - - touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml - + touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml ``` 2. 
エージェントが Snowflake データをキャプチャできるようにするには、次のスニペットを`nri-snowflake-config.yml`ファイルに追加します。 ```yml - - --- - integrations: - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountMetering - apis: - - name: snowflakeAccountMetering - file: /tmp/snowflake-account-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseLoadHistory - apis: - - name: snowflakeWarehouseLoadHistory - file: /tmp/snowflake-warehouse-load-history-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseMetering - apis: - - name: snowflakeWarehouseMetering - file: /tmp/snowflake-warehouse-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeTableStorage - apis: - - name: snowflakeTableStorage - file: /tmp/snowflake-table-storage-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStageStorageUsage - apis: - - name: snowflakeStageStorageUsage - file: /tmp/snowflake-stage-storage-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeReplicationUsgae - apis: - - name: snowflakeReplicationUsgae - file: /tmp/snowflake-replication-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeQueryHistory - apis: - - name: snowflakeQueryHistory - file: /tmp/snowflake-query-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakePipeUsage - apis: - - name: snowflakePipeUsage - file: /tmp/snowflake-pipe-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLongestQueries - apis: - - name: snowflakeLongestQueries - file: /tmp/snowflake-longest-queries.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLoginFailure - apis: - - name: snowflakeLoginFailure - file: /tmp/snowflake-login-failures.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDatabaseStorageUsage - apis: - - name: snowflakeDatabaseStorageUsage - file: /tmp/snowflake-database-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDataTransferUsage - apis: - - name: snowflakeDataTransferUsage - file: /tmp/snowflake-data-transfer-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeCreditUsageByWarehouse - apis: - - name: snowflakeCreditUsageByWarehouse - file: /tmp/snowflake-credit-usage-by-warehouse.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAutomaticClustering - apis: - - name: snowflakeAutomaticClustering - file: /tmp/snowflake-automatic-clustering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStorageUsage - apis: - - name: snowflakeStorageUsage - file: /tmp/snowflake-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountDetails - apis: - - name: snowflakeAccountDetails - file: /tmp/snowflake-account-details.json - + --- + integrations: + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountMetering + apis: + - name: snowflakeAccountMetering + file: /tmp/snowflake-account-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseLoadHistory + apis: + - name: snowflakeWarehouseLoadHistory + file: /tmp/snowflake-warehouse-load-history-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseMetering + apis: + - name: snowflakeWarehouseMetering + file: /tmp/snowflake-warehouse-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeTableStorage + apis: + - name: snowflakeTableStorage + file: /tmp/snowflake-table-storage-metrics.json + - 
name: nri-flex + interval: 30s + config: + name: snowflakeStageStorageUsage + apis: + - name: snowflakeStageStorageUsage + file: /tmp/snowflake-stage-storage-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeReplicationUsgae + apis: + - name: snowflakeReplicationUsgae + file: /tmp/snowflake-replication-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeQueryHistory + apis: + - name: snowflakeQueryHistory + file: /tmp/snowflake-query-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakePipeUsage + apis: + - name: snowflakePipeUsage + file: /tmp/snowflake-pipe-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLongestQueries + apis: + - name: snowflakeLongestQueries + file: /tmp/snowflake-longest-queries.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLoginFailure + apis: + - name: snowflakeLoginFailure + file: /tmp/snowflake-login-failures.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDatabaseStorageUsage + apis: + - name: snowflakeDatabaseStorageUsage + file: /tmp/snowflake-database-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDataTransferUsage + apis: + - name: snowflakeDataTransferUsage + file: /tmp/snowflake-data-transfer-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeCreditUsageByWarehouse + apis: + - name: snowflakeCreditUsageByWarehouse + file: /tmp/snowflake-credit-usage-by-warehouse.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAutomaticClustering + apis: + - name: snowflakeAutomaticClustering + file: /tmp/snowflake-automatic-clustering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStorageUsage + apis: + - name: snowflakeStorageUsage + file: /tmp/snowflake-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountDetails + apis: + - name: snowflakeAccountDetails + file: /tmp/snowflake-account-details.json ``` @@ -192,9 +186,7 @@ translationType: machine インフラストラクチャ エージェントを再起動します。 ```shell - sudo systemctl restart newrelic-infra.service - ``` 数分以内に、アプリケーションはメトリクスを [one.newrelic.com](https://one.newrelic.com)に送信します。 @@ -215,9 +207,7 @@ translationType: machine 以下は、Snowflake メトリックを確認するためのNRQLクエリです。 ```sql - - SELECT * from snowflakeAccountSample - + SELECT * FROM snowflakeAccountSample ``` diff --git a/src/i18n/content/jp/docs/logs/forward-logs/azure-log-forwarding.mdx b/src/i18n/content/jp/docs/logs/forward-logs/azure-log-forwarding.mdx index 6355ef6d588..07ec797050e 100644 --- a/src/i18n/content/jp/docs/logs/forward-logs/azure-log-forwarding.mdx +++ b/src/i18n/content/jp/docs/logs/forward-logs/azure-log-forwarding.mdx @@ -36,19 +36,80 @@ Azure ログをNew Relicに転送すると、ログ データの収集、処理 以下の手順に従ってください。 1. があることを確認してください。 + 2. **[one.newrelic.com](https://one.newrelic.com/launcher/logger.log-launcher)**から、左側のナビゲーションにある**Integrations & Agents**をクリックします。 + 3. **Logging**カテゴリで、データ ソースのリストにある**Microsoft Azure Event Hub**タイルをクリックします。 + 4. ログを送信するアカウントを選択し、 **Continue**をクリックします。 + 5. **Generate API key**をクリックして、生成された API キーをコピーします。 + 6. **Deploy to Azure**をクリックすると、新しいタブが開き、Azure にロードされた ARM テンプレートが表示されます。 + 7. 必要なリソースを作成する**Resource group****Region**を選択します。 必須ではありませんが、作成されたコンポーネントが誤って削除されないように、テンプレートを新しいリソース グループにインストールすることをお勧めします。 + 8. **New Relic license key**フィールドに、前にコピーした API キーを貼り付けます。 + 9. [NewRelicエンドポイント](/docs/logs/log-api/introduction-log-api/#endpoint)がアカウントに対応するエンドポイントに設定されていることを確認します。 -10. 
オプション:転送する[Azureサブスクリプションアクティビティログ](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log)を`true`に設定します。詳細については、このドキュメント[のサブスクリプション情報](#subscription-activity-logs)を参照してください。 -11. **Review + create**をクリックし、挿入したデータを確認して、 **Create**をクリックします。 + +10. スケーリングモードを選択します。デフォルトは`Basic`です。 + +11. オプション: EventHub バッチ処理 (v2.8.0 以降で利用可能) を構成して、パフォーマンスを最適化します。 + + * **最大イベント バッチ サイズ**: バッチあたりの最大イベント (デフォルト: 500、最小: 1) + * **最小イベント バッチ サイズ**: バッチあたりの最小イベント (デフォルト: 20、最小: 1) + * **最大待機時間**: バッチを構築するまでの最大待機時間 (HH:MM:SS 形式) (デフォルト: 00:00:30) + +12. オプション:転送する[Azureサブスクリプションアクティビティログ](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log)を`true`に設定します。詳細については、このドキュメント[のサブスクリプション情報](#subscription-activity-logs)を参照してください。 + +13. **Review + create**をクリックし、挿入したデータを確認して、 **Create**をクリックします。 このテンプレートはべき乗であることに注意してください。Event Hub からログの転送を開始してから、同じテンプレートを再実行して、 [Azure Subscription Activity Logs](#subscription-activity-logs) の転送を設定するには、手順 10 を完了します。 +### EventHub のバッチ処理とスケーリングを構成する (オプション) [#eventhub-configuration] + +バージョン 2.8.0 以降、ARM テンプレートは、パフォーマンスとスループットを最適化するための高度な EventHub 設定オプションをサポートしています。 + +**EventHub トリガーのバッチ処理の問題:** + +バッチ処理の動作を構成して、イベントの処理方法を制御できます。これらの設定は、Azure Function アプリケーション設定として構成されます。 + +* **Max Event Batch Size** : 関数にバッチで配信されるイベントの最大数 (デフォルト: 500、最小: 1)。 一緒に処理されるイベントの上限を制御します。 + +* **最小イベント バッチ サイズ**: 関数にバッチで配信されるイベントの最小数 (デフォルト: 20、最小: 1)。 最大待機時間に達しない限り、関数は処理の前に少なくともこの数のイベントが蓄積されるまで待機します。 + +* **最大待機時間**: 関数に配信する前にバッチを構築するのを待機する最大時間 (デフォルト: 00:00:30、形式: HH:MM:SS)。これにより、イベントの量が少ない場合でも、タイムリーな処理が保証されます。 + +これらの課題は、ログのボリュームと処理要件に基づいてスループットとリソースの使用率を最適化するのに役立ちます。 具体的な使用例に応じて、これらの値を調整します。 + +* 大量処理シナリオではバッチサイズを増やしてスループットを向上させる +* 低レイテンシ要件に合わせてバッチサイズを縮小 +* 待ち時間を調整してレイテンシとバッチ効率のバランスを取る + +**スケーリング設定 (v2.7.0+):** + +テンプレートは、Azure Functions のスケーリング モードの構成をサポートしており、ワークロードに基づいてコストとパフォーマンスを最適化できます。 + +* **基本スケーリング モード**: 既定で動的 SKU (Y1 ティア) 消費ベースのプランを使用します。この場合、 Azure受信イベントの数に基づいて関数インスタンスを自動的に追加および削除します。 + + * `disablePublicAccessToStorageAccount`オプションが有効な場合、Basic SKU (B1 ティア) プランを使用して VNet 統合をサポートします。 + * このモードは変動するワークロードに最適で、実行ごとの料金設定による自動コスト最適化を提供します。 + * EventHub ネームスペースには、標準のスループット ユニット スケーリングを備えた 4 つのパーティションが含まれています。 + +* **エンタープライズ スケーリング モード**: 専用の計算リソースによる高度なスケーリング機能と、インスタンス スケーリングのより詳細な制御を提供します。 このモードでは以下が提供されます: + + * Function App と EventHub の両方に対する自動スケーリング機能。 + * サイトごとのスケーリングが有効な Elastic Premium (EP1) ホスティング プラン + * EventHub の自動インフレが最大スループット ユニット 40 で有効になりました + * 並列性を向上させるためにパーティション数を増やしました(基本モードでは 4 パーティションに対して 32 パーティション)。 + * 事前ウォーミングされたインスタンスによる予測可能なパフォーマンスと低レイテンシ + * 大容量でミッションクリティカルなログ転送シナリオに適しています + +**重要な注意事項:** + +* Basic モードから Enterprise モードにアップグレードする場合、Standard SKU では作成後にパーティション数を変更できないという Azure の制限により、EventHub を再プロビジョニングする必要があります。 + ### オプション: サブスクリプションから Azure アクティビティ ログを送信する [#subscription-activity-logs] diff --git a/src/i18n/content/jp/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx b/src/i18n/content/jp/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx index 21871c79509..8923a277d8d 100644 --- a/src/i18n/content/jp/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx +++ b/src/i18n/content/jp/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx @@ -95,7 +95,11 @@ ANR エラーが発生すると、 Androidスタックトレースをキャプ **難読化解除:** -New Relic現在、プラットフォーム内で ANR スタックトレースを自動的に難読化解除しません。 この機能のサポートは将来のリリースで予定されています。それまでの間、難読化された ANR スタックトレースをNew Relicからダウンロードし、Proguard/R8 の`ndk-stack`または`retrace`ユーティリティなどのオフライン ツールを使用して、スタックトレースを手動でシンボライズできます。 +New Relic ANR 
スタックトレース内のJavaスタック フレームを自動的にシンボル化し、読み取り可能なメソッド名と行番号をプラットフォームに直接提供します。 + + + ネイティブ (NDK) スタック フレームは現在シンボル化されていません。ネイティブ スタック フレームの場合、 New Relicからスタックトレースをダウンロードし、 `ndk-stack`などのオフライン ツールを使用して手動でシンボル化できます。 + ## ANR監視を無効にする [#disable-anr-monitoring] diff --git a/src/i18n/content/jp/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx b/src/i18n/content/jp/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx new file mode 100644 index 00000000000..b2043de91e1 --- /dev/null +++ b/src/i18n/content/jp/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx @@ -0,0 +1,79 @@ +--- +subject: Docs +releaseDate: '2025-12-19' +version: 'December 15 - December 19, 2025' +translationType: machine +--- + +### 新しいドキュメント + +* ユーザー エクスペリエンスに対するフラストレーションのシグナルとパフォーマンスへの影響を理解するための包括的なガイダンスを提供するために、[ユーザーへの影響を](/docs/browser/new-relic-browser/browser-pro-features/user-impact)追加しました。 + +### 主な変更点 + +* ワークフロー アクションの大規模な再構成と組織化により、[アクション カタログが](/docs/workflow-automation/setup-and-configuration/actions-catalog)更新されました。 +* [Browserログの更新: 自動および手動のログ キャプチャ更新を開始します](/docs/browser/browser-monitoring/browser-pro-features/browser-logs/get-started)。 +* 更新された [ページ ビュー: フラストレーション シグナルとパフォーマンス影響情報を使用してページのパフォーマンスを調べます](/docs/browser/new-relic-browser/browser-pro-features/page-views-examine-page-performance)。 +* SAP ソリューション データ プロバイダーに関する詳細なガイダンスを提供するために、[データ プロバイダー参照を](/docs/sap-solutions/additional-resources/data-providers-reference)追加しました。 + +### マイナーチェンジ + +* eBPF フィルターの設定ドキュメントを[Kubernetesへの eBPF ネットワーク オブザーバビリティのインストール](/docs/ebpf/k8s-installation)および[Linux への eBPF ネットワーク オブザーバビリティのインストール に](/docs/ebpf/linux-installation)追加しました。 +* 強化されたセットアップ手順により、 [Agentic AI モデルのコンテキスト プロトコル セットアップ](/docs/agentic-ai/mcp/setup)が更新されました。 +* Kinesis Data Streams および Drupal 11.1/11.2 での[PHP エージェントの互換性と要件](/docs/apm/agents/php-agent/getting-started/php-agent-compatibility-requirements)を更新しました互換性。 +* 依存関係の最新の検証済み互換バージョンを使用して、 [.NET エージェントの互換性と要件](/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements)を更新しました。 +* 最新の互換性レポートに従って、 [Node.js エージェントの互換性と要件](/docs/apm/agents/nodejs-agent/getting-started/compatibility-requirements-nodejs-agent)を更新しました。 +* 現在の互換性情報に基づいて、 [Java エージェントの互換性と要件](/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent)を更新しました。 +* [Pythonを使用した強化された計装AWS Lambda関数。](/docs/serverless-function-monitoring/azure-function-monitoring/container)コンテナ化されたAzure Functions の明示的なインストール コマンドを使用します。 +* kTranslate の最新の Ubuntu バージョンのサポートにより、[ネットワーク フロー監視](/docs/network-performance-monitoring/setup-performance-monitoring/network-flow-monitoring)が更新されました。 +* 新しいコンテナ関数のサポートを反映するために、 [Lambda の APM エクスペリエンスへのアップグレードを](/docs/serverless-function-monitoring/aws-lambda-monitoring/instrument-lambda-function/upgrade-to-apm-experience)更新しました。 +* 以下の新着投稿を追加しました: + * [Transaction 360](/whats-new/2025/12/whats-new-12-15-transaction-360) + +### リリースノート + +* 弊社の最新リリース情報を常に把握してください: + + * [PHPエージェント v12.3.0.28](/docs/release-notes/agent-release-notes/php-release-notes/php-agent-12-3-0-28) : + + * AWS-sdk-php Kinesis Data Streams の計装を追加しました。 + * 再起動時にデーモンがパッケージ キャッシュをクリアしない問題を修正しました。 + * golang バージョンを 1.25.5 に上げました。 + + * [Node.jsエージェント v13.8.1](/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-13-8-1) : + * ラッピング ハンドラー コールバックが存在しない場合はスキップするようにAWS Lambda計装を更新しました。 + + * [Javaエージェント v8.25.1](/docs/release-notes/agent-release-notes/java-release-notes/java-agent-8251) : + * `CancellableContinuation`のサードパーティ実装に関する Kotlin コルーチン エラーを修正しました。 + + * [Browser 
v1.306.0](/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1.306.0) : + + * 個別の RUM フラグを通じてログ API の制御を追加しました。 + * onTTFB に依存する前に responseStart の検証を強化しました。 + * webpack 出力から改行構文を削除しました。 + + * [Kubernetesインテグレーション v3.51.1](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-1) : + * チャートバージョン newrelic-インフラストラクチャ-3.56.1 および nri-bundle-6.0.30 でリリースされました。 + + * [NRDOT v1.7.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-15) : + * ohi コンポーネントを nrdot-Collector-experimental ディストリビューションに追加しました。 + + * [NRDOT v1.6.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-12) : + + * otel コンポーネントのバージョンを v0.135.0 から v0.141.0 に上げました。 + * golang 1.24.11 にアップグレードして CVE-2025-61729 を修正しました。 + * 0.119.0 の transformprocessor 構成の非推奨に対処しました。 + + * [ジョブマネージャーリリース493](/docs/release-notes/synthetics-release-notes/job-manager-release-notes/job-manager-release-493) : + + * 最小 API バージョンが 1.44 に更新されたために発生した Docker 29 互換性の問題を修正しました。 + * 失敗したジョブの結果をカバーするために機密情報のデータマスキングを追加しました。 + + * [Node Browserランタイム rc1.5](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.5) : + * 最新の変更を反映したリリースを更新しました。 + + * [Node API ランタイム rc1.5](/docs/release-notes/synthetics-release-notes/node-api-runtime-release-notes/node-api-runtime-rc1.5) : + * 最新の変更を反映したリリースを更新しました。 + + * [Node Browserランタイム rc1.6](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.6) : + * 最新の変更を反映したリリースを更新しました。 \ No newline at end of file diff --git a/src/i18n/content/jp/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx b/src/i18n/content/jp/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx new file mode 100644 index 00000000000..f09183977c8 --- /dev/null +++ b/src/i18n/content/jp/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx @@ -0,0 +1,13 @@ +--- +subject: Kubernetes integration +releaseDate: '2025-12-23' +version: 3.51.2 +translationType: machine +--- + +変更の詳細な説明については、[リリース ノートを](https://github.com/newrelic/nri-kubernetes/releases/tag/v3.51.2)参照してください。 + +この統合は、次のチャート バージョンに含まれています。 + +* [ニューレリック・インフラストラクチャー 3.56.2](https://github.com/newrelic/nri-kubernetes/releases/tag/newrelic-infrastructure-3.56.2) +* [nri-バンドル-6.0.31](https://github.com/newrelic/helm-charts/releases/tag/nri-bundle-6.0.31) \ No newline at end of file diff --git a/src/i18n/content/jp/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx b/src/i18n/content/jp/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx new file mode 100644 index 00000000000..ef4e814dbbd --- /dev/null +++ b/src/i18n/content/jp/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx @@ -0,0 +1,17 @@ +--- +subject: NRDOT +releaseDate: '2025-12-19' +version: 1.8.0 +metaDescription: Release notes for NRDOT Collector version 1.8.0 +translationType: machine +--- + +## 変更ログ + +### 特徴 + +* 機能: ホテルコンポーネントのバージョンを v0.141.0 から v0.142.0 にアップグレード (#464) + +### バグ修正 + +* 修正: CVE-2025-68156 (#468) を修正するために expr-lang/expr:1.17.7 を強制する \ No newline at end of file diff --git a/src/i18n/content/jp/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx b/src/i18n/content/jp/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx index 
71dbefe480b..afb3c5ff1a5 100644 --- a/src/i18n/content/jp/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx +++ b/src/i18n/content/jp/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx @@ -36,7 +36,7 @@ New Relic前述のログ転送計装を使用している顧客に対し、次 - diff --git a/src/i18n/content/jp/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx b/src/i18n/content/jp/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx new file mode 100644 index 00000000000..1423f866654 --- /dev/null +++ b/src/i18n/content/jp/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx @@ -0,0 +1,169 @@ +--- +title: GitHub クラウド統合によるService Architecture Intelligence +tags: + - New Relic integrations + - GitHub integration +metaDescription: 'Learn how to integrate GitHub with New Relic to import repositories, teams, and user data for enhanced service architecture intelligence.' +freshnessValidatedDate: never +translationType: machine +--- + +GitHub 統合は、GitHub 組織からのコンテキストを使用してNew Relicデータを強化することで、 Service Architecture Intelligence強化します。 GitHub アカウントを接続すると、リポジトリ、チーム、プルリクエストのデータをNew Relicにインポートできます。 この追加情報により、[チーム](/docs/service-architecture-intelligence/teams/teams)、[カタログ](/docs/service-architecture-intelligence/catalogs/catalogs)、[スコアカード](/docs/service-architecture-intelligence/scorecards/getting-started)の価値が強化され、エンジニアリング作業のより完全で連携したビューが提供されます。 + +## あなたが始める前に + +**前提条件:** + +* 組織マネージャまたは認証ドメイン マネージャのロールのいずれかが必要です。 + +**サポートされているプラットフォーム:** + +* GitHubクラウド +* GitHub Enterprise Cloud(データレジデンシーなし) + +**サポートされている地域:**米国およびEU地域 + + + * データ レジデンシーを備えた GitHub Enterprise Server および GitHub Enterprise Cloud はサポートされていません。 + * GitHub ユーザー アカウントへのインテグレーションのインストールはサポートされていません。 GitHub ではユーザー レベルでのアプリのインストールが許可されますが、同期プロセスは機能せず、New Relic にデータはインポートされません。 + * GitHub 統合は FedRAMP に準拠していません。 + + +## 同期できるデータ + +GitHub 統合により、 New Relicにインポートするデータ型を選択して、どの情報を同期するかを制御できるようになります。 + +### 利用可能なデータタイプ + +* **リポジトリとプルリクエスト**: リポジトリとプルリクエストのデータをインポートして、コードの可視性とデプロイメントの追跡を改善します。 + +* **チーム**: GitHub チームとそのメンバーシップをインポートして、チーム管理と所有権のマッピングを強化します。 + + + **チーム統合の競合**: チームがすでに別のソース (Okta や他のアイデンティティプロバイダーなど) からNew Relicに統合されている場合、データの競合を防ぐため、GitHub チームを取得して保存することはできません。 この場合、選択できるデータはリポジトリとプルリクエストのデータのみです。\ + **ユーザーのメール表示要件**: チーム メンバーシップが GitHub チームと一致するようにするには、GitHub ユーザーは GitHub プロファイル設定で自分のメール アドレスを公開として設定する必要があります。 プライベートメール設定を持つチーム メンバーは、ユーザー データの同期プロセスから除外されます。 + + +## GitHub 統合のセットアップ + +1. **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**に移動します。 + +2. インテグレーションを設定したいアカウントを選択します。 + +3. **Set up a new integration** \[新しい統合のセットアップ]を選択し、 **Continue** \[続行]をクリックします。 + +4. **Begin integration** \[統合の開始]画面で: + + a. **Get started in GitHub** \[GitHub で開始する]をクリックしてアカウントを接続します。New Relicオブザーバビリティ アプリが GitHub マーケットプレイスで開きます。 + + b. GitHub 組織内でアプリのインストールを完了します。 インストール後、 New Relicインターフェースにリダイレクトされます。 + + c. **Begin integration** \[再度統合を開始する]を選択し、 **Continue** \[続行]をクリックします。 + + d.**Select your data preferences** \[データ設定を選択]: 同期するデータの種類を選択します。 + + * **Teams + Users**: GitHub チーム構造とユーザー情報をインポートします。 + * **Repositories + Pull Requests**: リポジトリとプルリクエストのデータをインポートします。 + * **Both**: 利用可能なすべてのデータ型をインポートします。 + + e.**Teams + Users** \[Teams + ユーザー]を選択した場合は、すべての GitHub チームのリストが表示されます。 インポートするすべてのチームまたは一部のチームを選択します。 + + f. 
**Start first sync** \[最初の同期を開始]をクリックして、選択したデータのインポートを開始します。 + + グラム。**Sync started** \[同期が開始されたという]メッセージが表示されたら、 **Continue** \[続行]をクリックします。**Integration status** \[統合ステータス]画面には、選択したデータ タイプ (チーム、リポジトリなど) の数が 5 秒ごとに更新されて表示されます。 すべてのデータが完全にインポートされるまで数分かかります。 + + GitHub integration + +5. *(オプション)* **GitHub integration** \[GitHub 統合]画面で、インポートされたデータにアクセスできます。 + + * **Go to Teams** \[チームに移動]をクリックすると、インポートされたチームが[Teams \[チーム\]](/docs/service-architecture-intelligence/teams/teams)ページに表示されます (セットアップ時にチームを選択した場合)。 + * **Go to Repositories** \[リポジトリに移動]をクリックすると、インポートされたリポジトリ情報が[Repositories \[リポジトリ\]](/docs/service-architecture-intelligence/repositories/repositories)カタログに表示されます (セットアップ中にリポジトリが選択されている場合)。 + +## GitHub 統合を管理する + +GitHub 統合をセットアップしたら、 New Relicインターフェイスを通じて管理できるようになります。 これには、データの更新、設定の編集、必要に応じてアンインストールが含まれます。 + +### 統合管理へのアクセス + +1. **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**に移動します。 + +2. **Select an action** \[アクションの選択]ステップで、 **Manage your organization** \[組織の管理]を選択し、 **Continue** \[続行]をクリックします。 + + Screenshot showing the manage organization option in GitHub integration + +**「GitHub 統合の管理」画面には、**接続されている組織が現在の同期ステータスとデータ型とともに表示されます。 + +### データを更新 + +データの更新オプションを使用すると、New Relic で GitHub データを効率的に更新できます。 + +**データを更新するには:** + +1. **Manage GitHub integration** \[GitHub 統合の管理]画面から、組織を見つけます。 + +2. 更新する組織の横にある**Refresh data** \[データの更新]をクリックし、 **Continue** \[続行]をクリックします。 + +3. **Refresh Data** \[データの更新]手順で、**Sync on demand** \[オンデマンドで同期を]クリックします。 + +その後、システムは GitHub の権限と組織へのアクセスを検証し、最後の同期以降に新規または変更されたデータのみを取得し、選択したデータ タイプに従って更新されたデータを処理およびマッピングし、最新の同期タイムスタンプとデータ数を反映するために統合ステータスを更新します。 + +**更新されるもの:** + +* チームとそのメンバーシップ +* リポジトリの変更(新規リポジトリ、アーカイブリポジトリ、権限変更) +* カスタムプロパティによるチーム所有権の更新 + + + **更新頻度**: 必要に応じて何度でもデータを更新できます。このプロセスは、組織の規模と選択したデータの種類に応じて、通常数分かかります。 + + +### 統合設定の編集 + +初期セットアップ後に統合設定を変更するには、**Edit** \[編集]オプションを使用します。 GitHub と New Relic の間で同期されるデータの種類や、同期するチームの選択を調整できます。 + +**GitHub 統合を編集するには:** + +1. **Manage GitHub integration** \[GitHub 統合の管理]画面から、組織を見つけます。 + +2. 更新する組織の横にある**Edit** \[編集]をクリックし、 **Continue** \[続行]をクリックします。 + +3. **Edit Integration Settings** \[統合設定の編集]ステップで、必要に応じて選択内容を調整します。 + +4. 更新を適用するには、 **Save changes** \[変更を保存]をクリックします。 + +**編集中に何が起こるか:** + +* 設定変更中も現在のデータはそのまま残ります。 同期するチームの選択が変更された場合、以前の選択は New Relic から削除されませんが、GitHub との同期は行われません。これらのチームは、Teams 機能で削除できます。 +* 新しい設定は以降の同期に適用されます +* 変更を適用する前にプレビューできます +* 変更を保存するまで、統合は以前の設定で実行され続けます。 + +### 自動チーム所有権を設定する + +GitHub でカスタム プロパティとして`teamOwningRepo`追加することで、GitHub リポジトリをチームに自動的に割り当てることができます。 + +組織レベルでカスタム プロパティを作成し、リポジトリ レベルでカスタム プロパティの値を割り当てます。さらに、組織レベルで複数のリポジトリに対して同時にカスタム プロパティを設定することもできます。 + +次に、New Relic Teams で自動所有権機能を有効にし、 `team`タグ キーとして使用するようにします。 + +これを設定すると、各リポジトリが適切なチームに自動的にマッチングされます。 + +カスタム プロパティの作成の詳細については、 [GitHub ドキュメント](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization)を参照してください。 + +### GitHub 統合をアンインストールする + +GitHub インテグレーションをアンインストールすると、選択した組織からのデータ同期が停止します。 New Relic 内で以前にインポートされたデータを保存するか削除するかを選択するオプションが提供されます。 + +**アンインストールするには:** + +1. **Manage GitHub integration** \[GitHub 統合の管理]画面から、アンインストールする組織を見つけて、 **Uninstall** \[アンインストール]をクリックします。 + +2. 確認ダイアログで、データを保持するか、データを削除するかを選択します。 + +3. 詳細を確認し、「組織のアンインストール」をクリックして確定します。 + +4. 
アンインストールを確認する成功メッセージが表示されます。 + + + **アンインストール後のデータ保持期間**: 保持されたデータは GitHub と同期されなくなり、後からNew Relicプラットフォーム内で (Teams 機能などを介して) 手動で削除できます。 + \ No newline at end of file diff --git a/src/i18n/content/jp/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx b/src/i18n/content/jp/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx new file mode 100644 index 00000000000..6ec1b75a30d --- /dev/null +++ b/src/i18n/content/jp/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx @@ -0,0 +1,567 @@ +--- +title: GitHub Enterprise によるService Architecture Intelligence (オンプレミス) +tags: + - New Relic integrations + - GitHub Enterprise integration +metaDescription: Integrate your on-premise GitHub Enterprise (GHE) environment with New Relic using a secure collector service and GitHub App for automated data ingestion. +freshnessValidatedDate: never +translationType: machine +--- + + + この機能はまだ開発中ですが、ぜひお試しください。 + + この機能は現在、弊社の[プレリリース ポリシー](/docs/licenses/license-information/referenced-policies/new-relic-pre-release-policy)に従ってプレビュー プログラムの一部として提供されています。 + + +オンプレミスの GitHub Enterprise アカウントのデータを活用して、サービス アーキテクチャーにインサイトをさらに深く組み込むことを検討していますか? New Relic GitHub Enterprise 統合は、プライベート ネットワーク内の安全なコレクター サービス デプロイを使用して、リポジトリとチームをNew Relicプラットフォームに直接インポートします。 + +新しい選択的データ取得機能を使用すると、チーム、リポジトリとプルリクエスト、またはその両方など、どのデータ タイプをインポートするかを正確に選択できます。 この統合 AI モニタリングにより、 New Relic内の[チーム](/docs/service-architecture-intelligence/teams/teams)、[カタログ](/docs/service-architecture-intelligence/catalogs/catalogs)、[スコアカード](/docs/service-architecture-intelligence/scorecards/getting-started)の管理と可視性が強化されます。 詳細については、 [Service Architecture Intelligence機能](/docs/service-architecture-intelligence/getting-started)を参照してください。 + +**前提条件** + +* 組織アドミニストレーター権限を持つ GitHub Enterprise オンプレミス アカウント。 +* GitHub Enterprise ネットワーク内でコレクター サービスを実行するための Docker 環境。 +* 統合を作成するための適切な権限を持つNew Relicアカウント。 + +## セキュリティに関する懸念事項 + +この統合はセキュリティベストプラクティスに従っています: + +* 最小限の権限でGitHub App認証を使用する +* Webhook イベントは秘密鍵を使用して認証されます +* すべてのデータ転送はHTTPS経由で行われます +* ユーザーの資格情報は保存も送信もされません +* リポジトリとチームのデータのみがインポートされます + +**GitHub Enterprise 統合をセットアップするには:** + + + + ## GitHub アプリを作成して設定する + + GHE インスタンスで、 **Settings → Developer Settings → GitHub Apps → New GitHub App**に移動します。GitHub アプリを作成する詳細な手順については、 [GitHub アプリの登録に関する GitHub ドキュメント](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app)を参照してください。 + + ### 権限を設定する + + アプリの権限を正確に構成して、初期同期中のシームレスなデータ取得と、その後の Webhook イベントの効率的なリッスンを保証します。アプリの権限は、GitHub 上のさまざまなリポジトリおよび組織リソースに対するアプリケーションのアクセス範囲を定義します。これらの権限をカスタマイズすることで、セキュリティを強化し、露出を最小限に抑えながらアプリケーションが必要なデータにのみアクセスできるようにすることができます。適切な設定により、スムーズな初期データ同期と信頼性の高いイベント処理が容易になり、アプリケーションと GitHub のエコシステムとの統合が最適化されます。 + + GitHub アプリの権限に関する詳細なガイダンスについては、 [GitHub アプリの権限設定に関する GitHub ドキュメント](https://docs.github.com/en/apps/creating-github-apps/setting-up-a-github-app/choosing-permissions-for-a-github-app)を参照してください。 + + #### 必要なリポジトリ権限 + + データ同期を有効にするには、次のリポジトリ レベルの権限を示されているとおりに構成します。 + + * **管理**: 読み取り専用 ✓ + * **チェック**: 読み取り専用 ✓ + * **コミットステータス**: 選択済み✓ + * **内容**:選択済み✓ + * **カスタムプロパティ**: 選択済み✓ + * **デプロイメント**: 読み取り専用 ✓ + * **メタデータ**: 読み取り専用(必須)✓ + * **プルリクエスト**: 選択済み✓ + * **Webhooks** : 読み取り専用 ✓ + + #### 必要な組織権限 + + 次の組織レベルの権限を、示されているとおりに構成します。 + + * **管理**: 読み取り専用 ✓ + * **カスタム組織ロール**: 読み取り専用 ✓ + * **カスタムプロパティ**: 読み取り専用 ✓ + * **カスタムリポジトリロール**: 読み取り専用 ✓ + * **イベント**: 読み取り専用 ✓ + * **メンバー**: 読み取り専用 ✓ + * **Webhooks** : 読み取り専用 ✓ + + #### Webhook 
イベントのサブスクリプション + + リアルタイム同期と監視のために、次の Webhook イベントを示されているとおりに正確に選択します。 + + **✓ 次のイベントを選択します:** + + * `check_run` - 実行ステータスの更新を確認する + * `check_suite` - スイートの完了を確認する + * `commit_comment` - コミットへのコメント + * `create` - ブランチまたはタグの作成 + * `custom_property` - チーム割り当てのカスタムプロパティの変更 + * `custom_property_values` - カスタムプロパティ値の変更 + * `delete` - ブランチまたはタグの削除 + * `deployment` - デプロイメント活動 + * `deployment_review` - デプロイメントのレビュープロセス + * `deployment_status` - デプロイメントステータスの更新 + * `fork` - リポジトリフォークイベント + * `installation_target` - GitHub アプリのインストレーションの変更 + * `label` - 問題とプルリクエストのラベル変更 + * `member` - メンバープロフィールの変更 + * `membership` - メンバーの追加と削除 + * `meta` - GitHub アプリのメタデータの変更 + * `milestone` - マイルストーンの変更 + * `organization` - 組織レベルの変更 + * `public` - リポジトリの可視性の変更 + * `pull_request` - プルリクエスト活動 + * `pull_request_review` - プルリクエストのレビュー活動 + * `pull_request_review_comment` - コメント活動のレビュー + * `pull_request_review_thread` - プルリクエストレビュースレッドのアクティビティ + * `push` - コードのプッシュとコミット + * `release` - 出版物とアップデートのリリース + * `repository` - リポジトリの作成、削除、変更 + * `star` - リポジトリスターイベント + * `status` - コミットステータスの更新 + * `team` - チームの作成と変更 + * `team_add` - チームメンバーの追加 + * `watch` - リポジトリ監視イベント + + + **セキュリティのベストプラクティス**: セキュリティの危険を軽減するには、最小特権アクセスの原則に従い、統合のニーズに必要な最小限のアクセス許可のみを有効にします。 + + + ### Webhookを設定する + + Webhook URL を設定し、安全な通信のためにカスタムイベント シークレットを作成します。 + + * **Webhook URL** : コレクター サービスのデプロイメントに基づいて次の形式を使用します。 + + * HTTPの場合: `http://your-domain-name/github/sync/webhook` + * HTTPSの場合: `https://your-domain-name/github/sync/webhook` + + **例**: コレクター サービスが`collector.yourcompany.com`にデプロイされている場合、Webhook URL は次のようになります。 `https://collector.yourcompany.com/github/sync/webhook` + + * **イベント シークレット**: Webhook 認証用の安全なランダム文字列 (32 文字以上) を生成します。この値は`GITHUB_APP_WEBHOOK_SECRET`環境変数に必要となるため保存してください。 + + ### キーの生成と変換 + + 1. GitHub アプリを作成したら、秘密鍵を生成する必要があります。GitHub アプリの設定で、 **Generate a private key** \[秘密鍵を生成]をクリックします。アプリは、一意のアプリ ID と秘密キー ファイル (.pem 形式) を自動的に生成してダウンロードします。これらはコレクターサービスの設定に必要となるため、安全に保存してください。 + + 2. ダウンロードした秘密鍵ファイルを DER 形式に変換し、Base64 でエンコードします。 + + **ステップ1: .pemをDER形式に変換する** + + ```bash + openssl rsa -outform der -in private-key.pem -out output.der + ``` + + **ステップ2: DERファイルをBase64でエンコードする** + + ```bash + # For Linux/macOS + base64 -i output.der -o outputBase64 + cat outputBase64 # Copy this output + + # For Windows (using PowerShell) + [Convert]::ToBase64String([IO.File]::ReadAllBytes("output.der")) + + # Alternative for Windows (using certutil) + certutil -encode output.der temp.b64 && findstr /v /c:- temp.b64 + ``` + + 結果の Base64 文字列をコピーし、コレクター設定の`GITHUB_APP_PRIVATE_KEY`環境変数の値として使用します。 + + **✓ 成功指標:** + + * Githubアプリが正常に作成されました + * アプリIDと秘密鍵は安全に保存されます + * Webhook URLが設定され、アクセス可能 + + + + ## 環境変数を準備する + + コレクター サービスをデプロイする前に、次の情報を収集してください。 + + ### 必要な環境変数 + +
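The values described in the table below are ultimately consumed by the `docker-compose.yml` shown later in this guide through `${VAR}` substitution. As a minimal sketch only, assuming you run Docker Compose from a working directory and keep secrets out of version control, the values can be staged in a local `.env` file; every value shown is a placeholder, and the `openssl rand` line is just one common way to produce the 32+ character webhook secret described in the webhook configuration above.

```bash
# One common way to generate the 32+ character webhook secret (any secure generator works).
openssl rand -hex 32

# Sketch of a local .env file that docker-compose reads for ${VAR} substitution.
# All values are placeholders; replace them with your own and never commit this file.
cat > .env <<'EOF'
NR_API_KEY=NRAK-REPLACE-ME
NR_LICENSE_KEY=REPLACE-ME
GHE_BASE_URL=https://ghe.example.com
GITHUB_APP_ID=123456
GITHUB_APP_PRIVATE_KEY=BASE64_ENCODED_DER_KEY
GITHUB_APP_WEBHOOK_SECRET=PASTE_GENERATED_SECRET_HERE
EOF
```

You can equally export these variables in the shell or inject them from a secret manager; the table below describes where each value comes from.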
+ Solution
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ 変数 + + ソース + + 入手方法 +
+ `NR_API_KEY` + + ニューレリック + + New RelicダッシュボードからAPIキーを生成します。 +
+ `NR_LICENSE_KEY` + + ニューレリック + + New Relicダッシュボードからライセンスキーを生成します。 +
+ `GHE_BASE_URL` + + GHEサーバー + + GHE サーバーのベース URL (例: + + `https://source.datanot.us` + + )。 +
+ `GITHUB_APP_ID` + + GitHubアプリ + + GitHub アプリを作成したときに生成された一意のアプリ ID。 +
+ `GITHUB_APP_PRIVATE_KEY` + + GitHubアプリ + + 秘密鍵 ( + + `.pem` + + ) ファイルの内容が Base64 文字列に変換されました。変換手順については手順 1 を参照してください。 +
+ `GITHUB_APP_WEBHOOK_SECRET` + + GitHubアプリ + + GitHub アプリの作成時に設定したカスタムイベント シークレットの値。 +
+ + ### オプションのSSL環境変数 + + 以下は、API を HTTPS にするためのオプションの環境変数です。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ オプション変数 + + ソース + + 入手方法 +
+ `SERVER_SSL_KEY_STORE` + + SSL設定 + + HTTPS設定用のSSLキーストアファイルへのパス。 以下のSSL証明書の設定手順を参照してください。 +
+ `SERVER_SSL_KEY_STORE_PASSWORD` + + SSL設定 + + SSL キーストア ファイルのパスワード。これは、PKCS12 キーストアを作成するときに設定したパスワードです。 +
+ `SERVER_SSL_KEY_STORE_TYPE` + + SSL設定 + + SSL キーストアのタイプ (例: PKCS12、JKS)。以下の SSL セットアップ手順に従う場合は、PKCS12 を使用してください。 +
+ `SERVER_SSL_KEY_ALIAS` + + SSL設定 + + キーストア内の SSL キーのエイリアス。これは、キーストアを作成するときに指定する名前です。 +
+ `SERVER_PORT` + + SSL設定 + + HTTPS 通信用のサーバー ポート。HTTPSの場合は8443を使用します。 +
+ + ### SSL証明書の設定手順 + + HTTPS 設定の信頼された認証局 (CA) から SSL 証明書を取得するには、次の手順に従います。 + + 1. **秘密鍵と証明書署名要求 (CSR) を生成します**。 + + ```bash + openssl req -new -newkey rsa:2048 -nodes -keyout mycert.key -out mycert.csr + ``` + + 2. **選択した CA に CSR を送信します**: 選択した証明機関 (DigiCert、Let's Encrypt、GoDaddy など) に`mycert.csr`ファイルを送信します。 + + 3. **ドメイン検証を完了する**: CA の指示に従って、必要なドメイン検証手順を完了します。 + + 4. **証明書をダウンロード**: CA から発行された証明書ファイル (通常は`.crt`または`.pem`ファイル) をダウンロードします。 + + 5. **PKCS12 キーストアを作成する**: 証明書と秘密鍵を PKCS12 キーストアに結合します。 + + ```bash + openssl pkcs12 -export -in mycert.crt -inkey mycert.key -out keystore.p12 -name mycert + ``` + + 6. **キーストアの使用**: 生成された`keystore.p12`ファイルをDocker設定の`SERVER_SSL_KEY_STORE`の値として使用します。 + + + + ## コレクターサービスをデプロイする + + コレクター サービスは Docker イメージとして提供されます。デプロイメントは、次の 2 つの方法のいずれかで実行できます。 + + ### オプション A: Docker Compose を使用する (推奨) + + サービスのダウンロードとデプロイメントを自動化するDocker Compose ファイルを作成します。 + + 1. 次の内容の`docker-compose.yml`ファイルを作成します。 + + ```yaml + version: '3.9' + + services: + nr-ghe-collector: + image: newrelic/nr-ghe-collector:tag # use latest tag available in dockerhub starting with v* + container_name: nr-ghe-collector + restart: unless-stopped + ports: + - "8080:8080" # HTTP port, make 8443 in case of HTTPS + environment: + # Required environment variables + - NR_API_KEY=${NR_API_KEY:-DEFAULT_VALUE} + - NR_LICENSE_KEY=${NR_LICENSE_KEY:-DEFAULT_VALUE} + - GHE_BASE_URL=${GHE_BASE_URL:-DEFAULT_VALUE} + - GITHUB_APP_ID=${GITHUB_APP_ID:-DEFAULT_VALUE} + - GITHUB_APP_PRIVATE_KEY=${GITHUB_APP_PRIVATE_KEY:-DEFAULT_VALUE} + - GITHUB_APP_WEBHOOK_SECRET=${GITHUB_APP_WEBHOOK_SECRET:-DEFAULT_VALUE} + + # Optional SSL environment variables (uncomment and configure if using HTTPS) + # - SERVER_SSL_KEY_STORE=${SERVER_SSL_KEY_STORE} + # - SERVER_SSL_KEY_STORE_PASSWORD=${SERVER_SSL_KEY_STORE_PASSWORD} + # - SERVER_SSL_KEY_STORE_TYPE=${SERVER_SSL_KEY_STORE_TYPE} + # - SERVER_SSL_KEY_ALIAS=${SERVER_SSL_KEY_ALIAS} + # - SERVER_PORT=8443 + #volumes: # Uncomment the line below if using SSL keystore + # - ./keystore.p12:/app/keystore.p12 # path to your keystore file + network_mode: bridge + + networks: + nr-network: + driver: bridge + ``` + + 2. Docker Compose ファイル内の`DEFAULT_VALUE`プレースホルダーを実際の値に置き換えて環境変数を設定するか、コマンドを実行する前にシステムに環境変数を作成してください。 + + + 秘密を含む環境ファイルをバージョン管理にコミットしないでください。運用環境では安全な秘密管理プラクティスを使用します。 + + + 3. サービスを開始するには、次のコマンドを実行します。 + + ```bash + docker-compose up -d + ``` + + ### オプションB: Dockerイメージの直接実行 + + Dockerイメージを[Docker Hub レジストリ](https://hub.docker.com/r/newrelic/nr-ghe-collector)から直接ダウンロードし、組織の推奨するCI/CDパイプラインまたはデプロイメント方法を使用して実行できます。 カスタマーは、コレクター サービスの開始時に、上記のすべての環境変数を渡す必要があることに注意してください。 + + **✓ 成功指標:** + + * Collectorサービスは実行されており、構成されたポートでアクセス可能です + * Dockerコンテナのログにエラーなしの正常な起動が表示される + * サービスはヘルスチェックに応答します(設定されている場合) + + + + ## 組織にGitHubアプリをインストールする + + コレクター サービスが実行されたら、統合する特定の組織に GitHub アプリをインストールする必要があります。 + + 1. GitHub Enterprise インスタンスに移動します。 + 2. **Settings** → **Developer Settings** → **GitHub Apps**に移動します。 + 3. 手順 1 で作成した GitHub アプリを見つけてクリックします。 + 4. 左側のサイドバーで、 **Install App** \[アプリをインストール] をクリックします。 + 5. アプリをインストールする組織を選択します。 + 6. すべてのリポジトリにインストールするか、特定のリポジトリを選択するかを選択します。 + 7. **Install** \[インストール]をクリックしてインストールを完了します。 + + **✓ 成功指標:** + + * Webhookの配信はGitHubアプリの設定に表示される + * コレクターサービスログに認証エラーはありません + + + + ## New Relic UIで統合セットアップを完了する + + コレクター サービスが実行され、GitHub アプリが GHE 組織にインストールされたら、 New Relic UIの指示に従って統合セットアップを完了します。 + + 1. 対応する GHE 組織が New Relic UI に表示されます。 + + 2. 初期データ同期を開始するには、 **First time sync** \[初回同期]をクリックします。 + + 3. 
*(オプション)*データを手動で同期するには、 **On-demand sync** \[オンデマンド同期]をクリックします。 + + + 4 時間ごとにデータを手動で同期できます。 過去 4 時間以内に同期が行われた場合、**On-demand sync** \[オンデマンド同期]ボタンは無効のままになります。 + + + 4. 同期が開始されたというメッセージが表示されたら、 **Continue** \[続行]をクリックします。**GitHub Enterprise Integration** \[GitHub Enterprise連携]画面には、チーム数とリポジトリ数が表示され、5秒ごとに更新されます。すべてのデータの完全なインポートには 15 ~ 30 分かかります (時間はリポジトリの数によって異なります)。 + + GitHub Enterprise Integration dashboard showing integration progress + + ### データの表示 + + **GitHub Enterprise Integration** \[GitHub Enterprise 統合]画面で: + + * [Teams](/docs/service-architecture-intelligence/teams/teams)にインポートされたチーム情報を表示するには、 **Go to Teams** \[Teams に移動]をクリックします。 + * インポートされたリポジトリ情報を[Catalogs](/docs/service-architecture-intelligence/catalogs/catalogs) \[カタログ]で表示するには、 **Go to Repositories** \[リポジトリに移動]をクリックします。 + + + + ## チームの割り当てを構成する(オプション) + + GitHub Enterprise でカスタム プロパティとして`teamOwningRepo`追加することで、GitHub リポジトリをチームに自動的に割り当てることができます。 + + 1. 組織レベルでカスタム プロパティを作成し、リポジトリ レベルでカスタム プロパティの値を割り当てます。さらに、組織レベルで複数のリポジトリに対して同時にカスタム プロパティを設定することもできます。 + 2. 次に、New Relic Teams で[自動所有権](/docs/service-architecture-intelligence/teams/manage-teams/#assign-ownership)機能を有効にし、 `team`タグ キーとして使用するようにします。 + + これを設定すると、New Relic は各リポジトリを適切なチームに自動的に一致させます。 + + カスタム プロパティの作成の詳細については、 [GitHub ドキュメント](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization)を参照してください。 + + + +## トラブルシューティング + +### よくある問題と解決策 + +**Webhook 配信の失敗:** + +* コレクターサービスが実行されており、GitHub Enterpriseからアクセスできることを確認します。 +* ファイアウォールの設定とネットワーク接続を確認する + +**認証エラー:** + +* GitHub App IDと秘密鍵が正しく設定されていることを確認する +* 秘密鍵がDER形式に正しく変換され、Base64でエンコードされていることを確認する +* GitHub Appとコレクター設定間でWebhookシークレットが一致していることを確認します + +**同期の失敗:** + +* GitHub アプリに必要な権限があることを確認する +* アプリが正しい組織にインストールされていることを確認する +* 特定のエラーメッセージについては、コレクター サービス ログを確認してください。 + +**ネットワーク接続の問題:** + +* コレクター サービスが GitHub Enterprise インスタンスに到達できることを確認する +* HTTPSを使用している場合はSSL証明書が適切に設定されていることを確認します +* GitHub Enterprise ドメインの DNS 解決を確認する + +## アンインストール + +GitHub Enterprise 統合をアンインストールするには: + +1. GitHub Enterprise UI に移動します。 +2. アプリがインストールされている組織の設定に移動します。 +3. GitHub Enterprise インターフェースから直接 GitHub App をアンインストールします。このアクションにより、バックエンド プロセスがトリガーされ、データ収集が停止されます。 +4. Docker 環境からコレクター サービスを停止して削除します。 \ No newline at end of file diff --git a/src/i18n/content/kr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx b/src/i18n/content/kr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx index 72f1923aea7..72b59c12d1a 100644 --- a/src/i18n/content/kr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx +++ b/src/i18n/content/kr/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx @@ -141,7 +141,7 @@ Java 에이전트를 설치하기 전에 시스템이 다음 요구 사항을 * 최신 버전으로 1.3.1 스프레이 * Tomcat 7.0.0-최신 버전 * Undertow 1.1.0.Final부터 최신까지 - * WebLogic 12.1.2.1-12.2.x (독점적인) + * WebLogic 12.1.2.1 - 14.1.1 * WebSphere 8-9(독점) * WebSphere Liberty 8.5 최신 버전 * Wildfly 8.0.0.Final부터 최신까지 @@ -482,4 +482,4 @@ Java 에이전트는 다른 New Relic 제품과 통합되어 엔드 투 엔드 - \ No newline at end of file + diff --git a/src/i18n/content/kr/docs/cci/azure-cci.mdx b/src/i18n/content/kr/docs/cci/azure-cci.mdx index f4177c23821..3a061865ceb 100644 --- a/src/i18n/content/kr/docs/cci/azure-cci.mdx +++ b/src/i18n/content/kr/docs/cci/azure-cci.mdx @@ -395,7 +395,7 @@ Azure 클라우드 비용 인텔리전스에 연결하기 전에 다음 사항 - 컨테이너 내에서 청구 데이터가 저장되는 기본 경로를 입력합니다(예: 2025년 10월의 경우 `20251001-20251031` ). 
**참고**: 청구 내보내기가 컨테이너 루트에 직접 게시되는 경우 이 필드를 비워 두세요. + 청구 데이터가 저장되는 컨테이너의 상대 경로를 월별 형식으로 입력하세요(예: 2025년 11월은 `20251101-20251130`, 2025년 12월은 `20251201-20251231`). **참고**: 청구 정보 내보내기가 컨테이너의 루트에 직접 게시되는 경우 이 필드를 비워 두십시오. diff --git a/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx index ffdff4b3a22..7c438672dff 100644 --- a/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx +++ b/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx @@ -26,28 +26,26 @@ Snowflake 통합을 통해 쿼리 성능, 스토리지 시스템 상태, 창고 ## Snowflake 지표 설정 - 아래 명령을 실행하여 Snowflake 지수를 JSON 형식으로 저장하면 nri-flex에서 읽을 수 있습니다. ACCOUNT, USERNAME 및 SNOWSQL\_PWD를 적절하게 수정하십시오. + 아래 명령어를 실행하여 Snowflake 메트릭을 JSON 형식으로 저장하면 nri-flex에서 이를 읽을 수 있습니다. `ACCOUNT`, `USERNAME`, `SNOWSQL_PWD` 를 적절히 수정하십시오. ```shell - - # Run the below command as a 1 minute cronjob - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM 
"SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o 
friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json - + # Run the below command as a 1 minute cronjob + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o 
header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, 
ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json ``` @@ -59,130 +57,126 @@ Snowflake 통합을 통해 쿼리 성능, 스토리지 시스템 상태, 창고 1. 통합 디렉터리에 `nri-snowflake-config.yml` 이라는 파일을 만듭니다. ```shell - - touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml - + touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml ``` 2. 에이전트가 Snowflake 데이터를 캡처할 수 있도록 하려면 다음 스니펫을 `nri-snowflake-config.yml` 파일에 추가하세요. 
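The snippet below assumes that the SnowSQL commands from the metrics setup step above are already being refreshed by cron every minute, so the JSON files under `/tmp` stay current. As a sketch only, if those commands were saved in a script at a path of your choosing (the path `/opt/newrelic/snowflake-metrics.sh` here is hypothetical), the corresponding crontab entry could look like this:

```shell
# Refresh the /tmp/snowflake-*.json files every minute (script path is an example).
* * * * * /opt/newrelic/snowflake-metrics.sh
```

The nri-flex entries that follow simply read those files every 30 seconds: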
```yml - - --- - integrations: - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountMetering - apis: - - name: snowflakeAccountMetering - file: /tmp/snowflake-account-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseLoadHistory - apis: - - name: snowflakeWarehouseLoadHistory - file: /tmp/snowflake-warehouse-load-history-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseMetering - apis: - - name: snowflakeWarehouseMetering - file: /tmp/snowflake-warehouse-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeTableStorage - apis: - - name: snowflakeTableStorage - file: /tmp/snowflake-table-storage-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStageStorageUsage - apis: - - name: snowflakeStageStorageUsage - file: /tmp/snowflake-stage-storage-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeReplicationUsgae - apis: - - name: snowflakeReplicationUsgae - file: /tmp/snowflake-replication-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeQueryHistory - apis: - - name: snowflakeQueryHistory - file: /tmp/snowflake-query-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakePipeUsage - apis: - - name: snowflakePipeUsage - file: /tmp/snowflake-pipe-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLongestQueries - apis: - - name: snowflakeLongestQueries - file: /tmp/snowflake-longest-queries.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLoginFailure - apis: - - name: snowflakeLoginFailure - file: /tmp/snowflake-login-failures.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDatabaseStorageUsage - apis: - - name: snowflakeDatabaseStorageUsage - file: /tmp/snowflake-database-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDataTransferUsage - apis: - - name: snowflakeDataTransferUsage - file: /tmp/snowflake-data-transfer-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeCreditUsageByWarehouse - apis: - - name: snowflakeCreditUsageByWarehouse - file: /tmp/snowflake-credit-usage-by-warehouse.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAutomaticClustering - apis: - - name: snowflakeAutomaticClustering - file: /tmp/snowflake-automatic-clustering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStorageUsage - apis: - - name: snowflakeStorageUsage - file: /tmp/snowflake-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountDetails - apis: - - name: snowflakeAccountDetails - file: /tmp/snowflake-account-details.json - + --- + integrations: + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountMetering + apis: + - name: snowflakeAccountMetering + file: /tmp/snowflake-account-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseLoadHistory + apis: + - name: snowflakeWarehouseLoadHistory + file: /tmp/snowflake-warehouse-load-history-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseMetering + apis: + - name: snowflakeWarehouseMetering + file: /tmp/snowflake-warehouse-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeTableStorage + apis: + - name: snowflakeTableStorage + file: /tmp/snowflake-table-storage-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStageStorageUsage + apis: 
+ - name: snowflakeStageStorageUsage + file: /tmp/snowflake-stage-storage-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeReplicationUsgae + apis: + - name: snowflakeReplicationUsgae + file: /tmp/snowflake-replication-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeQueryHistory + apis: + - name: snowflakeQueryHistory + file: /tmp/snowflake-query-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakePipeUsage + apis: + - name: snowflakePipeUsage + file: /tmp/snowflake-pipe-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLongestQueries + apis: + - name: snowflakeLongestQueries + file: /tmp/snowflake-longest-queries.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLoginFailure + apis: + - name: snowflakeLoginFailure + file: /tmp/snowflake-login-failures.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDatabaseStorageUsage + apis: + - name: snowflakeDatabaseStorageUsage + file: /tmp/snowflake-database-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDataTransferUsage + apis: + - name: snowflakeDataTransferUsage + file: /tmp/snowflake-data-transfer-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeCreditUsageByWarehouse + apis: + - name: snowflakeCreditUsageByWarehouse + file: /tmp/snowflake-credit-usage-by-warehouse.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAutomaticClustering + apis: + - name: snowflakeAutomaticClustering + file: /tmp/snowflake-automatic-clustering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStorageUsage + apis: + - name: snowflakeStorageUsage + file: /tmp/snowflake-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountDetails + apis: + - name: snowflakeAccountDetails + file: /tmp/snowflake-account-details.json ``` @@ -192,9 +186,7 @@ Snowflake 통합을 통해 쿼리 성능, 스토리지 시스템 상태, 창고 인프라 에이전트를 다시 시작하십시오. ```shell - sudo systemctl restart newrelic-infra.service - ``` 몇 분 안에 애플리케이션이 메트릭을 [one.newrelic.com](https://one.newrelic.com)으로 보냅니다. @@ -215,9 +207,7 @@ Snowflake 통합을 통해 쿼리 성능, 스토리지 시스템 상태, 창고 다음은 Snowflake 지표를 확인하는 NRQL 쿼리입니다. ```sql - - SELECT * from snowflakeAccountSample - + SELECT * FROM snowflakeAccountSample ``` diff --git a/src/i18n/content/kr/docs/logs/forward-logs/azure-log-forwarding.mdx b/src/i18n/content/kr/docs/logs/forward-logs/azure-log-forwarding.mdx index dd3a260f241..72a66515995 100644 --- a/src/i18n/content/kr/docs/logs/forward-logs/azure-log-forwarding.mdx +++ b/src/i18n/content/kr/docs/logs/forward-logs/azure-log-forwarding.mdx @@ -36,19 +36,80 @@ Event Hub에서 로그를 보내려면: 이 단계를 따르세요: 1. 있는지 확인하세요. + 2. **[one.newrelic.com](https://one.newrelic.com/launcher/logger.log-launcher)** 의 왼쪽 탐색 메뉴에서 **Integrations & Agents** 클릭합니다. + 3. **Logging** 카테고리의 데이터 소스 목록에서 **Microsoft Azure Event Hub** 타일을 클릭합니다. + 4. 로그를 전송할 계정을 선택하고 **Continue** 클릭합니다. + 5. **Generate API key** 클릭하고 생성된 API 키를 복사합니다. + 6. **Deploy to Azure** 클릭하면 Azure에 로드된 ARM 템플릿과 함께 새 탭이 열립니다. + 7. 필요한 리소스를 생성하려는 **Resource group** 을 선택하고 **Region** 을 선택합니다. 필수는 아니지만 템플릿이 실수로 생성한 구성 요소를 삭제하지 않도록 새 리소스 그룹에 템플릿을 설치하는 것이 좋습니다. + 8. **New Relic license key** 필드에 이전에 복사한 API 키를 붙여넣습니다. + 9. [New Relic 엔드포인트](/docs/logs/log-api/introduction-log-api/#endpoint) 가 귀하의 계정에 해당하는 엔드포인트로 설정되어 있는지 확인하십시오. -10. 선택 사항: 전달할 [Azure 구독 활동 로그](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) 를 `true` 로 설정합니다. 
자세한 내용은 이 문서 [의 구독 정보](#subscription-activity-logs) 를 참조하세요. -11. **Review + create** 클릭하고 삽입한 데이터를 검토한 후 **Create** 클릭합니다. + +10. 확대/축소 모드를 선택하세요. 기본값은 `Basic` 입니다. + +11. 선택 사항: EventHub 일괄 처리를 구성하여 성능을 최적화합니다(v2.8.0 이상에서 사용 가능): + + * **Max Event Batch Size**: 배치당 최대 이벤트 (기본값: 500, 최소값: 1) + * **Min Event Batch Size**: 배치당 최소 이벤트 (기본값: 20, 최소값: 1) + * **최대 대기 시간**: 배치 생성에 소요되는 최대 대기 시간(HH:MM:SS 형식, 기본값: 00:00:30) + +12. 선택 사항: 전달할 [Azure 구독 활동 로그](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) 를 `true` 로 설정합니다. 자세한 내용은 이 문서 [의 구독 정보](#subscription-activity-logs) 를 참조하세요. + +13. **Review + create** 클릭하고 삽입한 데이터를 검토한 후 **Create** 클릭합니다. 템플릿은 멱등원입니다. Event Hub에서 로그 전달을 시작한 다음 10단계를 완료하여 동일한 템플릿을 다시 실행하여 [Azure 구독 활동 로그](#subscription-activity-logs) 전달을 구성할 수 있습니다. +### EventHub 배치 처리 및 스케일링을 구성합니다(선택 사항). [#eventhub-configuration] + +버전 2.8.0부터 ARM 템플릿은 성능 및 처리량을 최적화하기 위한 고급 EventHub 설정 옵션을 지원합니다. + +**EventHub 트리거 일괄 처리에 대해 반응합니다:** + +일괄 처리 동작을 구성하여 이벤트 처리 방식을 제어할 수 있습니다. 이러한 설정은 Azure Function 애플리케이션 설정으로 구성됩니다. + +* **최대 이벤트 배치 크기** : 일괄적으로 함수에 전달되는 최대 이벤트 수(기본값: 500, 최소: 1). 이는 동시에 처리되는 이벤트의 상한을 제어합니다. + +* **Min Event Batch Size** : 일괄적으로 함수에 전달되는 최소 이벤트 수(기본값: 20, 최소: 1). 함수는 최대 대기 시간에 도달하지 않는 한, 최소한 이만큼의 이벤트가 누적될 때까지 기다린 후 처리합니다. + +* **최대 대기 시간** : 함수에 전달하기 전에 배치를 구성하는 데 걸리는 최대 대기 시간(기본값: 00:00:30, 형식: HH:MM:SS). 이를 통해 이벤트 발생량이 적을 때에도 적시에 처리할 수 있습니다. + +이러한 매개변수는 요청 볼륨 및 처리 요구 사항을 기반으로 처리량 및 리소스 활용을 최적화하는 데 도움이 됩니다. 사용 사례에 따라 이러한 값을 조정하십시오. + +* 대량 처리 시나리오에서는 배치 크기를 늘려 처리량을 향상시키세요. +* 짧은 시간 요구 사항의 경우 배치 크기를 줄이십시오. +* 지연 시간과 배치 효율성 간의 균형을 맞추도록 대기 시간을 조정하세요. + +**스케일링 설정(v2.7.0+):** + +이 템플릿은 Azure Functions 확장 모드 구성을 지원하므로 워크로드에 따라 비용과 성능을 최적화할 수 있습니다. + +* **기본 확장 모드**: 기본적으로 동적 SKU(Y1 티어) 소비 기반 플랜을 사용하며, Azure 수신 이벤트 수에 따라 함수 인스턴스를 자동으로 추가 및 제거합니다. + + * `disablePublicAccessToStorageAccount` 옵션이 활성화된 경우 VNet 통합을 지원하기 위해 기본 SKU(B1 티어) 플랜을 사용합니다. + * 이 모드는 가변적인 작업에 이상적이며, 실행 횟수에 따른 가격 책정을 통해 자동 비용 최적화를 제공합니다. + * EventHub 네임스페이스에는 표준 처리량 단위 확장을 사용하는 4개의 파티션이 포함되어 있습니다. + +* **엔터프라이즈 확장 모드**: 전용 컴퓨트 리소스와 인스턴스 확장에 대한 더 많은 제어 기능을 통해 고급 확장 기능을 제공합니다. 이 모드는 다음을 제공합니다: + + * Function App과 EventHub 모두에 자동 스케일링 기능이 추가되었습니다. + * 사이트별 확장이 활성화된 Elastic Premium(EP1) 호스팅 플랜 + * EventHub 자동 인플레이션 기능이 활성화되었으며 최대 처리량 단위는 40입니다. + * 병렬 처리 성능 향상을 위해 파티션 수를 늘렸습니다(기본 모드의 4개 파티션 대비 32개 파티션). + * 예열된 기능으로 예측 가능한 성능 및 지연 시간 감소 + * 대용량, 중요 업무용 로그 포워딩 시나리오에 더 적합 + +**중요 사항:** + +* Basic 모드에서 Enterprise 모드로 업그레이드할 경우, Azure의 제한 사항(Standard SKU는 생성 후 파티션 수를 변경할 수 없음)으로 인해 EventHub를 다시 프로비저닝해야 합니다. + ### 선택 사항: 구독에서 Azure 활동 로그 보내기 [#subscription-activity-logs] diff --git a/src/i18n/content/kr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx b/src/i18n/content/kr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx index 5d376a5c15a..0007967e6ed 100644 --- a/src/i18n/content/kr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx +++ b/src/i18n/content/kr/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx @@ -95,7 +95,11 @@ ANR 오류가 발생하면 Android는 그리드 추적을 캡처합니다. 스 **난독화 해제:** -뉴렐릭은 현재 플랫폼 내에서 ANR 그리드 추적을 자동으로 해독하지 않습니다. 이 기능에 대한 지원은 향후 릴리스에서 제공될 예정입니다. 그동안 뉴렐릭에서 난독화된 ANR 헬리콥터 추적을 다운로드한 다음 Proguard/R8의 `ndk-stack` 또는 `retrace` 유틸리티와 같은 오프라인 도구를 사용하여 수동으로 헬리콥터 추적을 심볼화할 수 있습니다. +뉴렐릭은 ANR 그리드 추적에서 배터리 그리드 프레임을 자동으로 상징화하여 플랫폼에서 직접 읽을 수 있는 메서드 이름과 라인 번호를 제공합니다. + + + 현재 네이티브(NDK) 스택 프레임은 심볼화되지 않습니다. 기본 그리드 프레임의 경우 뉴렐릭에서 그리드 추적을 다운로드하고 `ndk-stack` 와 같은 오프라인 도구를 사용하여 수동으로 기호화할 수 있습니다. 
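다음은 뉴렐릭에서 다운로드한 ANR 스택 추적을 `ndk-stack` 으로 수동 심볼화하는 예시입니다. 심볼 디렉터리와 트레이스 파일 경로는 설명을 위해 가정한 값이므로 실제 빌드 환경에 맞게 바꿔야 합니다.

```shell
# 예시: 경로는 가정한 값이며 실제 환경에 맞게 교체하세요.
$ANDROID_NDK_HOME/ndk-stack \
  -sym app/build/intermediates/merged_native_libs/release/out/lib/arm64-v8a \
  -dump downloaded-anr-trace.txt
```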
+ ## ANR 모니터링 비활성화 [#disable-anr-monitoring] diff --git a/src/i18n/content/kr/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx b/src/i18n/content/kr/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx new file mode 100644 index 00000000000..04ac73217b2 --- /dev/null +++ b/src/i18n/content/kr/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx @@ -0,0 +1,79 @@ +--- +subject: Docs +releaseDate: '2025-12-19' +version: 'December 15 - December 19, 2025' +translationType: machine +--- + +### 새로운 문서 + +* 좌절 신호와 성능이 사용자 환경에 미치는 영향을 이해하기 위한 포괄적인 지침을 제공하기 위해 [사용자 영향을](/docs/browser/new-relic-browser/browser-pro-features/user-impact) 추가했습니다. + +### 주요 변경 사항 + +* [활동 카탈로그를](/docs/workflow-automation/setup-and-configuration/actions-catalog) 대규모로 개편하고 활동을 조직화하여 업데이트했습니다. +* 업데이트된 [브라우저 로그인: 자동 및 수동 로그인 캡처 업데이트를 시작하세요](/docs/browser/browser-monitoring/browser-pro-features/browser-logs/get-started). +* 업데이트된 [페이지 조회수: 사용자 불편 신호 및 성능 영향 정보를 통해 페이지 성능을 분석하세요](/docs/browser/new-relic-browser/browser-pro-features/page-views-examine-page-performance). +* SAP 솔루션 데이터 제공자를 위한 자세한 지침을 제공하기 위해 [데이터 제공자 참조를](/docs/sap-solutions/additional-resources/data-providers-reference) 추가했습니다. + +### 사소한 변경 사항 + +* [Kubernetes에 eBPF 네트워크 옵저버빌리티 설치](/docs/ebpf/k8s-installation) 및 [Linux에 eBPF 네트워크 옵저버빌리티 설치](/docs/ebpf/linux-installation) 에 eBPF 필터 설정 문서를 추가했습니다. +* [Agentic AI 모델의 컨텍스트 프로토콜 설정이](/docs/agentic-ai/mcp/setup) 개선되었으며, 설정 방법 안내가 향상되었습니다. +* Kinesis Data Streams 및 Drupal 11.1/11.2와의 [PHP 에이전트 호환성 및 요구 사항이](/docs/apm/agents/php-agent/getting-started/php-agent-compatibility-requirements) 업데이트되었습니다. 호환성. +* 의존성/종속성에 대한 최신 검증된 호환 버전으로 [.NET 에이전트 호환성 및 요구 사항이](/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements) 업데이트되었습니다. +* 최신 호환성 보고서를 반영하여 [Node.js 에이전트 호환성 및 요구 사항을](/docs/apm/agents/nodejs-agent/getting-started/compatibility-requirements-nodejs-agent) 업데이트했습니다. +* 현재 호환성 정보로 [배터리 교체 및 요구 사항](/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent) 이 업데이트되었습니다. +* 컨테이너화된 Azure Functions에 대한 명시적 설치 명령이 [포함된 향상된 AWS Lambda 함수입니다](/docs/serverless-function-monitoring/azure-function-monitoring/container). +* kTranslate의 최신 Ubuntu 버전 지원을 통해 [네트워크 흐름 모니터링 기능이](/docs/network-performance-monitoring/setup-performance-monitoring/network-flow-monitoring) 업데이트되었습니다. +* 새로운 컨테이너 함수 지원을 반영하여 [Lambda를 APM 환경으로 업그레이드](/docs/serverless-function-monitoring/aws-lambda-monitoring/instrument-lambda-function/upgrade-to-apm-experience) 했습니다. +* 새로운 소식 게시물을 추가했습니다: + * [트랜잭션 360](/whats-new/2025/12/whats-new-12-15-transaction-360) + +### 릴리즈 정보 + +* 최신 릴리스에 대한 최신 정보를 받아보세요. + + * [PHP 에이전트 v12.3.0.28](/docs/release-notes/agent-release-notes/php-release-notes/php-agent-12-3-0-28): + + * AWS-sdk-php Kinesis Data Streams 측정, 리소스를 추가했습니다. + * 데몬이 재시작 시 패키지 캐시를 지우지 않던 문제를 수정했습니다. + * Go 언어 버전을 1.25.5로 업데이트했습니다. + + * [Node.js 에이전트 v13.8.1](/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-13-8-1): + * 존재하지 않는 경우 래핑 핸들러 콜백을 건너뛰도록 AWS Lambda 측정, 도구를 업데이트했습니다. + + * [자바 에이전트 v8.25.1](/docs/release-notes/agent-release-notes/java-release-notes/java-agent-8251): + * Kotlin 코루틴에서 `CancellableContinuation` 의 타사 구현 관련 오류를 수정했습니다. + + * [브라우저 에이전트 v1.306.0](/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1.306.0): + + * 별도의 RUM 플래그를 통해 로그 API에 대한 제어 기능을 추가했습니다. + * onTTFB를 사용하기 전에 responseStart에 대한 유효성 검사를 강화했습니다. + * 웹팩 출력에서 줄 바꿈 구문을 제거했습니다. 
+ + * [Kubernetes 통합 v3.51.1](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-1): + * 차트 버전 newrelic-인프라-3.56.1 및 nri-bundle-6.0.30과 함께 출시되었습니다. + + * [NRDOT v1.7.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-15): + * nrdot-수집기 실험 배포판에 ohi 구성 요소를 추가했습니다. + + * [NRDOT v1.6.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-12): + + * 호텔 구성 요소 버전을 v0.135.0에서 v0.141.0으로 업데이트했습니다. + * golang 버전을 1.24.11로 업데이트하여 CVE-2025-61729 문제를 해결했습니다. + * 0.119.0 버전의 transformprocessor 구성 사용 중단 문제를 해결했습니다. + + * [Job Manager 릴리스 493](/docs/release-notes/synthetics-release-notes/job-manager-release-notes/job-manager-release-493): + + * 최소 API 버전이 1.44로 업데이트되면서 발생했던 도커 29 호환성 문제를 수정했습니다. + * 작업 실패 결과를 가리기 위해 민감한 정보에 대한 데이터 마스킹 기능을 추가했습니다. + + * [Node 브라우저 런타임 rc1.5](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.5): + * 최신 변경 사항이 반영된 업데이트 버전입니다. + + * [Node API 런타임 rc1.5](/docs/release-notes/synthetics-release-notes/node-api-runtime-release-notes/node-api-runtime-rc1.5): + * 최신 변경 사항이 반영된 업데이트 버전입니다. + + * [Node 브라우저 런타임 rc1.6](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.6): + * 최신 변경 사항이 반영된 업데이트 버전입니다. \ No newline at end of file diff --git a/src/i18n/content/kr/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx b/src/i18n/content/kr/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx new file mode 100644 index 00000000000..45ec7405d23 --- /dev/null +++ b/src/i18n/content/kr/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx @@ -0,0 +1,13 @@ +--- +subject: Kubernetes integration +releaseDate: '2025-12-23' +version: 3.51.2 +translationType: machine +--- + +변경 사항에 대한 자세한 설명은 [릴리스 노트를](https://github.com/newrelic/nri-kubernetes/releases/tag/v3.51.2) 참조하십시오. + +이 통합은 다음 차트 버전에 포함되어 있습니다. + +* [뉴렐릭 인프라 3.56.2](https://github.com/newrelic/nri-kubernetes/releases/tag/newrelic-infrastructure-3.56.2) +* [nri-bundle-6.0.31](https://github.com/newrelic/helm-charts/releases/tag/nri-bundle-6.0.31) \ No newline at end of file diff --git a/src/i18n/content/kr/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx b/src/i18n/content/kr/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx new file mode 100644 index 00000000000..1b624084645 --- /dev/null +++ b/src/i18n/content/kr/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx @@ -0,0 +1,17 @@ +--- +subject: NRDOT +releaseDate: '2025-12-19' +version: 1.8.0 +metaDescription: Release notes for NRDOT Collector version 1.8.0 +translationType: machine +--- + +## 변경 로그 + +### 특징 + +* 기능: otel 구성 요소 버전을 v0.141.0에서 v0.142.0으로 업데이트합니다(#464). + +### 버그 수정 + +* 수정: CVE-2025-68156(#468)을 해결하기 위해 expr-lang/expr:1.17.7 버전을 강제로 사용합니다. 
\ No newline at end of file diff --git a/src/i18n/content/kr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx b/src/i18n/content/kr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx index 820ff6c5e48..2f13b577863 100644 --- a/src/i18n/content/kr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx +++ b/src/i18n/content/kr/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx @@ -36,7 +36,7 @@ translationType: machine - diff --git a/src/i18n/content/kr/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx b/src/i18n/content/kr/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx new file mode 100644 index 00000000000..c37ead49691 --- /dev/null +++ b/src/i18n/content/kr/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx @@ -0,0 +1,169 @@ +--- +title: GitHub 클라우드 통합을 통한 서비스 아키텍처 인텔리전스 +tags: + - New Relic integrations + - GitHub integration +metaDescription: 'Learn how to integrate GitHub with New Relic to import repositories, teams, and user data for enhanced service architecture intelligence.' +freshnessValidatedDate: never +translationType: machine +--- + +GitHub 통합은 GitHub 조직의 컨텍스트로 뉴렐릭 데이터를 풍부하게 하여 리뷰 인텔리전스를 강화합니다. GitHub 계정을 연결하면 로그, 팀을 가져오고 요청 데이터를 뉴렐릭으로 가져올 수 있습니다. 이 추가 정보는 [팀](/docs/service-architecture-intelligence/teams/teams), [카탈로그](/docs/service-architecture-intelligence/catalogs/catalogs) 및 [스코어카드](/docs/service-architecture-intelligence/scorecards/getting-started) 의 가치를 강화하여 엔지니어링 작업에 대한 더욱 완벽하고 통합적인 시각을 제공합니다. + +## 시작하기 전에 + +**필수 조건:** + +* 조직 관리자 또는 인증 도메인 관리자 역할 중 하나가 있어야 합니다. + +**지원되는 플랫폼:** + +* GitHub 클라우드 +* GitHub Enterprise Cloud (데이터 상주 없음) + +**지원 지역:** 미국 및 EU 지역 + + + * 데이터 상주 기능을 사용하는 GitHub Enterprise Server 및 GitHub Enterprise Cloud는 지원되지 않습니다. + * GitHub 사용자 계정에 통합 기능을 설치하는 것은 지원되지 않습니다. GitHub에서 사용자 수준으로 앱을 설치할 수는 있지만, 동기화 프로세스는 작동하지 않으며 뉴렐릭으로 데이터가 가져와지지 않습니다. + * GitHub 통합은 FedRAMP 규정을 준수하지 않습니다. + + +## 어떤 데이터를 동기화할 수 있나요? + +GitHub 통합을 사용하면 뉴렐릭으로 가져올 데이터 유형을 선택적으로 선택할 수 있으므로 어떤 정보가 동기화되는지 제어할 수 있습니다. + +### 사용 가능한 데이터 유형 + +* **표면 및 풀 요청**: 더 나은 코드 가시성 및 구현, 배포 추적을 위해 데이터를 가져오고 끌어옵니다. + +* **팀**: GitHub 팀과 멤버십을 가져와 팀 관리 및 소유권 매핑을 향상합니다. + + + **팀 통합 충돌**: 팀이 이미 Okta 또는 다른 ID 공급자를 통해 뉴렐릭에 통합된 경우, 데이터 충돌을 방지하기 위해 GitHub 팀을 가져와 저장할 수 없습니다. 이 경우, 사용자는 선택하고 데이터를 가져오는 것만 가능합니다.\ + **사용자 이메일 공개 요구 사항**: 팀 구성원 정보가 GitHub 팀과 일치하도록 하려면 GitHub 사용자는 GitHub 프로필 설정에서 이메일 주소를 공개로 구성해야 합니다. 이메일 설정이 비공개인 팀 구성원은 사용자 데이터 동기화 프로세스에서 제외됩니다. + + +## GitHub 통합 설정 + +1. **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)** 으로 이동합니다. + +2. 통합을 설정할 계정을 선택하세요. + +3. **Set up a new integration** \[새 통합 설정을] 선택하고 **Continue** \[계속을] 클릭합니다. + +4. **Begin integration** \[통합 시작] 화면에서: + + a. **Get started in GitHu**b \[GitHub에서 시작하기를] 클릭하여 계정을 연결하세요. 뉴렐릭 옵저버빌리티 앱이 GitHub Marketplace에서 열립니다. + + b. GitHub 조직 내에서 앱 설치를 완료하세요. 설치가 완료되면 뉴렐릭 인터페이스로 다시 이동합니다. + + c. **Begin integration** \[통합 시작]을 다시 선택하고 **Continue** \[계속] 을 클릭합니다. + + d. **Select your data preferences** \[데이터 기본 설정 선택]: 동기화할 데이터 유형을 선택하세요. + + * **Teams + Users**: GitHub 팀 구조 및 사용자 정보를 가져옵니다. + * **Repositories + Pull Requests**: 모서리를 가져오고 데이터를 가져옵니다. + * **Both**: 사용 가능한 모든 데이터 유형을 가져옵니다. + + 이자형. **Teams + Users** \[팀 + 사용자]를 선택한 경우 모든 GitHub 팀 목록이 표시됩니다. 가져올 팀을 모두 선택하거나 일부만 선택하세요. 
+ + f. 선택한 데이터 가져오기를 시작하려면 **Start first sync** \[첫 번째 동기화 시작]을 클릭하세요. + + g. **Sync started** \[동기화 시작] 메시지가 표시된 후 **Continue** \[계속을] 클릭합니다. **Integration status** \[통합 상태] 화면에는 선택한 데이터 유형(팀, 저장소 등)의 수가 표시되며 5초마다 새로 고쳐집니다. 모든 데이터를 완전히 가져오는 데 몇 분이 걸릴 수 있습니다. + + GitHub integration + +5. *(선택 사항)* **GitHub integration** \[GitHub 통합] 화면에서 가져온 데이터에 액세스할 수 있습니다. + + * 팀 페이지에서 가져온 팀을 보려면 **Go to Teams** \[팀으로 이동]을 클릭하세요([ Teams \[팀\]](/docs/service-architecture-intelligence/teams/teams) 설정 중에 팀을 선택한 경우). + * **Go to Repositories** \[클립으로 이동을] 클릭하면 [Repositories \[클립\]](/docs/service-architecture-intelligence/repositories/repositories) 카탈로그에서 가져온 클립 정보를 볼 수 있습니다(설정 중에 클립을 선택한 경우). + +## GitHub 통합 관리 + +GitHub 통합을 설정한 후에는 뉴렐릭 인터페이스를 통해 관리할 수 있습니다. 여기에는 데이터 새로 고침, 설정 편집, 필요 시 설치 제거가 포함됩니다. + +### 액세스 통합 관리 + +1. **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)** 으로 이동합니다. + +2. **Select an action** \[작업 선택] 단계에서 **Manage your organization** \[조직 관리를] 선택하고 **Continue** \[계속을] 클릭합니다. + + Screenshot showing the manage organization option in GitHub integration + +**Manage GitHub integration** \[GitHub 통합 관리] 화면에는 연결된 조직과 해당 조직의 현재 동기화 상태 및 데이터 유형이 표시됩니다. + +### 데이터 새로 고침 + +데이터 새로 고침 옵션은 뉴렐릭에서 GitHub 데이터를 간편하게 업데이트할 수 있는 방법을 제공합니다. + +**데이터를 새로 고치려면:** + +1. **Manage GitHub integration** \[GitHub 통합 관리] 화면에서 조직을 찾으세요. + +2. 업데이트하려는 조직 옆에 있는 **Refresh data** \[데이터 새로 고침]을 클릭한 다음 **Continue** \[계속]을 클릭합니다. + +3. **Refresh Data** \[데이터 새로 고침] 단계에서 **Sync on demand** \[주문형 동기화를] 클릭합니다. + +그러면 시스템에서 GitHub 권한과 조직 액세스를 검증하고, 마지막 동기화 이후 새 데이터나 변경된 데이터만 가져오고, 선택한 데이터 유형에 따라 업데이트된 데이터를 처리하고 매핑하고, 최신 동기화 타임스탬프와 데이터 수를 반영하도록 통합 상태를 업데이트합니다. + +**새로고침되는 내용:** + +* 팀과 멤버십 +* 포인터 변경(새 리포지터리, 보관된 리포지터리, 권한 변경) +* 사용자 정의 속성을 통해 업데이트된 팀 소유권 + + + **새로 고침 빈도**: 필요한 만큼 자주 데이터를 새로 고칠 수 있습니다. 일반적으로 이 프로세스는 조직의 규모와 선택한 데이터 유형에 따라 몇 분 정도 걸립니다. + + +### 통합 설정 편집 + +초기 설정 후 통합 구성을 수정하려면 **Edit** \[편집] 옵션을 사용하십시오. GitHub와 뉴렐릭 간에 동기화할 데이터 유형과 동기화할 팀을 선택할 수 있습니다. + +**GitHub 통합을 편집하려면:** + +1. **Manage GitHub integration** \[GitHub 통합 관리] 화면에서 조직을 찾으세요. + +2. 업데이트하려는 조직 옆에 있는 **Edit** \[편집]을 클릭한 다음 **Continue** \[계속]을 클릭합니다. + +3. **Edit Integration Settings** \[통합 설정 편집] 단계에서 필요에 따라 선택 사항을 조정하십시오. + +4. **Save changes** \[변경 사항 저장을] 클릭하여 업데이트를 적용하세요. + +**편집 중에 무슨 일이 일어나는가:** + +* 설정 변경 중에도 현재 데이터는 그대로 유지됩니다. 동기화할 팀 선택이 변경된 경우, 이전 선택 사항은 뉴렐릭에서 삭제되지 않지만 GitHub와의 동기화는 되지 않습니다. 팀 기능에서 이러한 팀을 삭제할 수 있습니다. +* 새로운 설정은 후속 동기화에 적용됩니다. +* 변경 사항을 적용하기 전에 미리 볼 수 있습니다. +* 변경 사항을 저장할 때까지 이전 설정으로 통합이 계속 실행됩니다. + +### 자동 팀 소유권 설정 + +GitHub에서 `teamOwningRepo` 사용자 정의 속성으로 추가하여 GitHub 저장소를 해당 팀에 자동으로 할당할 수 있습니다. + +조직 수준에서 사용자 정의 속성을 만들고 저장소 수준에서 사용자 정의 속성에 대한 값을 할당합니다. 또한, 조직 수준에서 여러 저장소에 대한 사용자 정의 속성을 동시에 설정할 수 있습니다. + +그런 다음 뉴렐릭 Teams에서 자동 소유권 기능을 활성화하고 태그 키로 `team` 사용해야 합니다. + +이것이 설정되면 각 리포지터리가 올바른 팀과 자동으로 매칭됩니다. + +사용자 지정 속성을 만드는 방법에 대한 자세한 내용은 [GitHub 문서를](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization) 참조하세요. + +### GitHub 통합 제거 + +GitHub 통합을 제거하면 선택한 조직의 데이터 동기화가 중지됩니다. 뉴렐릭 내에서 이전에 가져온 데이터를 보존할지 삭제할지 선택할 수 있는 옵션이 제공됩니다. + +**제거하려면:** + +1. **Manage GitHub integration** \[GitHub 통합 관리] 화면에서 제거하려는 조직을 찾아 **Uninstall** \[제거를] 클릭합니다. + +2. 확인 대화 상자에서 데이터를 유지할지 삭제할지 선택하십시오. + +3. 세부 정보를 검토하고 \[조직 제거]를 클릭하여 확인하십시오. + +4. 제거가 완료되었음을 확인하는 성공 메시지가 표시됩니다. 
+ + + **제거 후 데이터 보존**: 보존된 데이터는 더 이상 GitHub와 동기화되지 않으며 뉴렐릭 플랫폼(예: Teams 기능) 내에서 나중에 수동으로 삭제할 수 있습니다. + \ No newline at end of file diff --git a/src/i18n/content/kr/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx b/src/i18n/content/kr/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx new file mode 100644 index 00000000000..1a2c2255799 --- /dev/null +++ b/src/i18n/content/kr/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx @@ -0,0 +1,567 @@ +--- +title: GitHub Enterprise(온프레미스)를 활용한 서비스 아키텍처 인텔리전스 +tags: + - New Relic integrations + - GitHub Enterprise integration +metaDescription: Integrate your on-premise GitHub Enterprise (GHE) environment with New Relic using a secure collector service and GitHub App for automated data ingestion. +freshnessValidatedDate: never +translationType: machine +--- + + + 이 기능은 아직 개발 중이지만 꼭 사용해 보시기 바랍니다! + + 이 기능은 현재 [출시 전 정책](/docs/licenses/license-information/referenced-policies/new-relic-pre-release-policy) 에 따라 미리보기 프로그램의 일부로 제공됩니다. + + +온프레미스 GitHub Enterprise 계정의 데이터를 활용하여 서비스 아키텍처를 더 깊이 이해하고 싶으신가요? 뉴렐릭 GitHub Enterprise 통합은 비공개 네트워크 내에서 보안 수집기 서비스 구현하다, 배포하다를 사용하여 작업자와 팀을 뉴렐릭 플랫폼으로 직접 가져옵니다. + +새로운 선택적 데이터 가져오기 기능을 사용하면 팀, 저장소 및 풀 요청, 또는 둘 다를 포함하여 가져올 데이터 유형을 정확하게 선택할 수 있습니다. 이 통합 AI 모니터링은 뉴렐릭 내의 [팀](/docs/service-architecture-intelligence/teams/teams), [카탈로그](/docs/service-architecture-intelligence/catalogs/catalogs) 및 [스코어카드](/docs/service-architecture-intelligence/scorecards/getting-started) 의 관리 및 가시성을 향상합니다. 자세한 내용은 [서비스 아키텍처 인텔리전스 기능을](/docs/service-architecture-intelligence/getting-started) 참조하십시오. + +**전제 조건** + +* 조직 관리자 권한이 있는 GitHub Enterprise 온프레미스 계정입니다. +* GitHub Enterprise 네트워크 내에서 수집기 서비스를 실행하기 위한 도커 환경입니다. +* 통합을 생성할 수 있는 적절한 권한이 있는 뉴렐릭 계정입니다. + +## 보안 고려 사항 + +이 통합은 보안 모범 사례를 따릅니다. + +* 최소한의 필수 권한만 사용하여 GitHub 앱 인증을 사용합니다. +* 웹훅 이벤트는 비밀 키를 사용하여 인증됩니다. +* 모든 데이터 전송은 HTTPS를 통해 이루어집니다. +* 사용자 자격 증명은 저장되거나 전송되지 않습니다. +* 저장소 및 팀 데이터만 가져옵니다. + +**GitHub Enterprise 통합을 설정하려면 다음 단계를 따르세요.** + + + + ## GitHub 앱을 생성하고 구성하세요 + + GHE 인스턴스에서 **Settings → Developer Settings → GitHub Apps → New GitHub App** 으로 이동합니다. GitHub 앱 생성에 대한 자세한 지침은 [GitHub 앱 등록 관련 GitHub 문서를](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) 참조하세요. + + ### 권한 설정 + + 초기 동기화 시 원활한 데이터 가져오기와 이후 웹훅 이벤트의 효율적인 수신을 위해 앱 권한을 정확하게 구성하십시오. 앱 권한은 애플리케이션이 GitHub의 다양한 저장소 및 조직 리소스에 액세스할 수 있는 범위를 정의합니다. 이러한 권한을 맞춤 설정하면 보안을 강화하고 애플리케이션이 필요한 데이터에만 액세스하도록 보장하는 동시에 노출을 최소화할 수 있습니다. 적절한 설정을 통해 초기 데이터 동기화가 원활하게 이루어지고 이벤트 처리가 안정적으로 진행되어 애플리케이션이 GitHub 생태계와 최적으로 통합될 수 있습니다. + + GitHub 앱 권한에 대한 자세한 지침은 [GitHub 앱 권한 설정에 대한 GitHub 문서를](https://docs.github.com/en/apps/creating-github-apps/setting-up-a-github-app/choosing-permissions-for-a-github-app) 참조하세요. + + #### 필수 저장소 권한 + + 데이터 동기화를 활성화하려면 아래에 표시된 대로 저장소 수준 권한을 정확하게 구성하십시오. + + * **관리자 권한**: 읽기 전용 ✓ + * **확인**: 읽기 전용 ✓ + * **커밋 상태**: 선택됨 ✓ + * **내용**: 선택됨 ✓ + * **사용자 지정 속성**: 선택됨 ✓ + * **구현, 배치**: 읽기 전용 ✓ + * **메타데이터**: 읽기 전용(필수) ✓ + * **당겨주세요**: 선택됨 ✓ + * **웹훅**: 읽기 전용 ✓ + + #### 필수 조직 권한 + + 다음과 같이 조직 수준 권한을 구성하십시오. + + * **관리자 권한**: 읽기 전용 ✓ + * **사용자 지정 조직 역할**: 읽기 전용 ✓ + * **사용자 지정 속성**: 읽기 전용 ✓ + * **사용자 지정 저장소 역할**: 읽기 전용 ✓ + * **이벤트**: 읽기 전용 ✓ + * **회원**: 읽기 전용 ✓ + * **웹훅**: 읽기 전용 ✓ + + #### 웹훅 이벤트 구독 + + 실시간 동기화 및 모니터링을 위해 아래 웹훅 이벤트를 표시된 대로 정확하게 선택하십시오. 
+ + **✓ 다음 이벤트를 선택하세요:** + + * `check_run` - 실행 상태 업데이트를 확인하세요 + * `check_suite` - 전체 구성 여부를 확인하세요 + * `commit_comment` - 커밋에 대한 댓글 + * `create` - 브랜치 또는 태그 생성 + * `custom_property` - 팀 배정을 위한 사용자 지정 속성 변경 + * `custom_property_values` - 사용자 지정 속성 값 변경 + * `delete` - 브랜치 또는 태그 삭제 + * `deployment` - 구현, 배포 활동 + * `deployment_review` - 구현, 배포 검토 프로세스 + * `deployment_status` - 구현, 배포 상태 업데이트 + * `fork` - 저장소 포크 이벤트 + * `installation_target` - GitHub 앱 설치 변경 사항 + * `label` - 이슈 및 풀 리퀘스트의 레이블 변경 + * `member` - 회원 프로필 변경 + * `membership` - 회원 추가 및 삭제 + * `meta` - GitHub 앱 메타데이터 변경 사항 + * `milestone` - 주요 변경 사항 + * `organization` - 조직 차원의 변화 + * `public` - 저장소 가시성 변경 사항 + * `pull_request` - 활동을 당겨주세요 + * `pull_request_review` - 풀 요청 검토 활동 + * `pull_request_review_comment` - 댓글 활동 검토 + * `pull_request_review_thread` - 풀 요청 검토 스레드 활동 + * `push` - 코드 푸시 및 커밋 + * `release` - 발행물 및 업데이트 + * `repository` - 저장소 생성, 삭제 및 수정 + * `star` - 저장소 스타 이벤트 + * `status` - 커밋 상태 업데이트 + * `team` - 팀 생성 및 수정 + * `team_add` - 팀원 추가 + * `watch` - 저장소 감시 이벤트 + + + **보안 모범 사례**: 보안 위험을 줄이려면 최소 권한 접근 원칙을 따르고 통합 요구 사항에 필요한 최소한의 권한만 부여하십시오. + + + ### 웹훅 설정 + + Webhook URL을 구성하고 보안 통신을 위한 맞춤형 대시보드 비밀을 생성하세요. + + * **Webhook URL**: 수집기 서비스 구현, 배포에 따라 다음 형식을 사용합니다. + + * HTTP의 경우: `http://your-domain-name/github/sync/webhook` + * HTTPS의 경우: `https://your-domain-name/github/sync/webhook` + + **예**: 수집기 서비스가 구현하다, 배포하다 at `collector.yourcompany.com` 인 경우 웹훅 URL은 다음과 같습니다. `https://collector.yourcompany.com/github/sync/webhook` + + * **이벤트 시크릿**: 웹훅 인증을 위한 안전한 임의 문자열(32자 이상)을 생성합니다. 이 값은 `GITHUB_APP_WEBHOOK_SECRET` 환경 변수에 필요하므로 저장해 두세요. + + ### 키 생성 및 변환 + + 1. GitHub 앱을 생성한 후에는 개인 키를 생성해야 합니다. GitHub 앱 설정에서 **Generate a private key** \[개인 키 생성]을 클릭하세요. 앱은 자동으로 고유한 앱 ID와 개인 키 파일(.pem 형식)을 생성하고 다운로드합니다. 이 정보는 수집기 서비스 설정에 필요하므로 안전하게 보관하십시오. + + 2. 다운로드한 개인 키 파일을 DER 형식으로 변환한 다음 Base64로 인코딩하세요. + + **1단계: .pem 파일을 DER 형식으로 변환합니다.** + + ```bash + openssl rsa -outform der -in private-key.pem -out output.der + ``` + + **2단계: DER 파일을 Base64로 인코딩합니다.** + + ```bash + # For Linux/macOS + base64 -i output.der -o outputBase64 + cat outputBase64 # Copy this output + + # For Windows (using PowerShell) + [Convert]::ToBase64String([IO.File]::ReadAllBytes("output.der")) + + # Alternative for Windows (using certutil) + certutil -encode output.der temp.b64 && findstr /v /c:- temp.b64 + ``` + + 결과로 생성된 Base64 문자열을 복사하여 수집기 설정에서 `GITHUB_APP_PRIVATE_KEY` 환경 변수의 값으로 사용하십시오. + + **✓ 성공 지표:** + + * GitHub 앱이 성공적으로 생성되었습니다. + * 앱 ID와 개인 키는 안전하게 저장됩니다. + * 웹훅 URL이 구성되어 있고 접근 가능합니다. + + + + ## 환경 변수를 준비합니다. + + 수집기 서비스를 구현하거나 배포하기 전에 다음 정보를 수집합니다. + + ### 필수 환경 변수 + +
<table>
  <thead>
    <tr>
      <th>
        변수
      </th>
      <th>
        원천
      </th>
      <th>
        얻는 방법
      </th>
    </tr>
  </thead>

  <tbody>
    <tr>
      <td>
        `NR_API_KEY`
      </td>
      <td>
        New Relic
      </td>
      <td>
        뉴렐릭 대시보드에서 API 키를 생성합니다.
      </td>
    </tr>

    <tr>
      <td>
        `NR_LICENSE_KEY`
      </td>
      <td>
        New Relic
      </td>
      <td>
        뉴렐릭 대시보드에서 라이선스 키를 생성하세요.
      </td>
    </tr>

    <tr>
      <td>
        `GHE_BASE_URL`
      </td>
      <td>
        GHE 서버
      </td>
      <td>
        GHE 서버의 기본 URL(예: `https://source.datanot.us`)입니다.
      </td>
    </tr>

    <tr>
      <td>
        `GITHUB_APP_ID`
      </td>
      <td>
        GitHub 앱
      </td>
      <td>
        GitHub 앱을 생성할 때 생성되는 고유한 앱 ID입니다.
      </td>
    </tr>

    <tr>
      <td>
        `GITHUB_APP_PRIVATE_KEY`
      </td>
      <td>
        GitHub 앱
      </td>
      <td>
        개인 키(`.pem`) 파일의 내용을 변환한 Base64 문자열입니다. 변환 방법은 1단계를 참조하십시오.
      </td>
    </tr>

    <tr>
      <td>
        `GITHUB_APP_WEBHOOK_SECRET`
      </td>
      <td>
        GitHub 앱
      </td>
      <td>
        GitHub 앱을 생성할 때 설정한 사용자 정의 웹훅 시크릿 값입니다.
      </td>
    </tr>
  </tbody>
</table>

  ### 선택적 SSL 환경 변수

  다음은 API를 HTTPS로 제공하기 위한 선택적 환경 변수입니다.

<table>
  <thead>
    <tr>
      <th>
        선택적 변수
      </th>
      <th>
        원천
      </th>
      <th>
        얻는 방법
      </th>
    </tr>
  </thead>

  <tbody>
    <tr>
      <td>
        `SERVER_SSL_KEY_STORE`
      </td>
      <td>
        SSL 설정
      </td>
      <td>
        HTTPS 설정을 위한 SSL 키 저장소 파일의 경로입니다. 아래의 SSL 인증서 설정 지침을 참조하십시오.
      </td>
    </tr>

    <tr>
      <td>
        `SERVER_SSL_KEY_STORE_PASSWORD`
      </td>
      <td>
        SSL 설정
      </td>
      <td>
        SSL 키 저장소 파일의 비밀번호입니다. PKCS12 키 저장소를 생성할 때 설정한 암호입니다.
      </td>
    </tr>

    <tr>
      <td>
        `SERVER_SSL_KEY_STORE_TYPE`
      </td>
      <td>
        SSL 설정
      </td>
      <td>
        SSL 키 저장소 유형(예: PKCS12, JKS)입니다. 아래의 SSL 설정 지침을 따를 때는 PKCS12를 사용하십시오.
      </td>
    </tr>

    <tr>
      <td>
        `SERVER_SSL_KEY_ALIAS`
      </td>
      <td>
        SSL 설정
      </td>
      <td>
        키 저장소 내 SSL 키의 별칭입니다. 키 저장소를 생성할 때 지정한 이름입니다.
      </td>
    </tr>

    <tr>
      <td>
        `SERVER_PORT`
      </td>
      <td>
        SSL 설정
      </td>
      <td>
        HTTPS 통신용 서버 포트입니다. HTTPS를 사용하려면 포트 번호 8443을 사용하십시오.
      </td>
    </tr>
  </tbody>
</table>
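  다음은 위 두 표의 변수를 셸 환경에 설정하는 최소 예시입니다. 변수 이름은 이 문서에서 정의된 것이고, 값은 모두 가상의 자리 표시자이므로 실제 환경에 맞게 교체해야 합니다. SSL 관련 변수는 아래 SSL 인증서 설정 지침을 완료한 경우에만 필요합니다.

  ```bash
  # 모든 값은 가상의 자리 표시자입니다. 실제 값으로 교체하세요.
  export NR_API_KEY="NRAK-XXXXXXXXXXXXXXXXXXXXXXXXXXX"
  export NR_LICENSE_KEY="0123456789abcdef0123456789abcdefNRAL"
  export GHE_BASE_URL="https://ghe.example.com"
  export GITHUB_APP_ID="123456"
  # 1단계에서 Base64로 변환한 개인 키(outputBase64 파일의 내용)
  export GITHUB_APP_PRIVATE_KEY="$(cat outputBase64)"
  export GITHUB_APP_WEBHOOK_SECRET="your-32-char-random-webhook-secret"

  # 선택 사항: HTTPS를 사용하는 경우에만 설정합니다.
  export SERVER_SSL_KEY_STORE="/app/keystore.p12"
  export SERVER_SSL_KEY_STORE_PASSWORD="changeit"
  export SERVER_SSL_KEY_STORE_TYPE="PKCS12"
  export SERVER_SSL_KEY_ALIAS="mycert"
  export SERVER_PORT="8443"
  ```

  이렇게 내보낸 값은 아래 도커 Compose 예시의 환경 변수로도 그대로 전달됩니다.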
+ + ### SSL 인증서 설정 지침 + + HTTPS 설정을 위해 신뢰할 수 있는 인증 기관(CA)으로부터 SSL 인증서를 받으려면 다음 단계를 따르십시오. + + 1. **개인 키와 인증서 서명 요청(CSR)을 생성합니다**. + + ```bash + openssl req -new -newkey rsa:2048 -nodes -keyout mycert.key -out mycert.csr + ``` + + 2. **선택한 CA에 CSR 제출**: `mycert.csr` 파일을 선택한 인증 기관(예: DigiCert, Let's Encrypt, GoDaddy)에 제출하세요. + + 3. **도메인 유효성 검사 완료**: CA(인증 기관)의 지시에 따라 필요한 도메인 유효성 검사 단계를 모두 완료하십시오. + + 4. **인증서 다운로드**: CA에서 발급된 인증서 파일(일반적으로 `.crt` 또는 `.pem` 파일)을 다운로드합니다. + + 5. **PKCS12 키 저장소 생성**: 인증서와 개인 키를 PKCS12 키 저장소로 결합합니다. + + ```bash + openssl pkcs12 -export -in mycert.crt -inkey mycert.key -out keystore.p12 -name mycert + ``` + + 6. **키스토어 사용**: 생성된 `keystore.p12` 파일을 도커 설정의 `SERVER_SSL_KEY_STORE` 값으로 사용합니다. + + + + ## 수집기 서비스를 구현하다, 배포하다 + + 수집기 서비스는 도커 이미지로 제공됩니다. 구현, 배포는 다음 두 가지 방법 중 하나로 수행될 수 있습니다. + + ### 옵션 A: 도커 컴포즈 사용 (권장) + + 서비스의 다운로드 및 구현, 배포를 자동화하는 도커 Compose 파일을 만듭니다. + + 1. 다음 내용을 포함하는 `docker-compose.yml` 파일을 생성하세요: + + ```yaml + version: '3.9' + + services: + nr-ghe-collector: + image: newrelic/nr-ghe-collector:tag # use latest tag available in dockerhub starting with v* + container_name: nr-ghe-collector + restart: unless-stopped + ports: + - "8080:8080" # HTTP port, make 8443 in case of HTTPS + environment: + # Required environment variables + - NR_API_KEY=${NR_API_KEY:-DEFAULT_VALUE} + - NR_LICENSE_KEY=${NR_LICENSE_KEY:-DEFAULT_VALUE} + - GHE_BASE_URL=${GHE_BASE_URL:-DEFAULT_VALUE} + - GITHUB_APP_ID=${GITHUB_APP_ID:-DEFAULT_VALUE} + - GITHUB_APP_PRIVATE_KEY=${GITHUB_APP_PRIVATE_KEY:-DEFAULT_VALUE} + - GITHUB_APP_WEBHOOK_SECRET=${GITHUB_APP_WEBHOOK_SECRET:-DEFAULT_VALUE} + + # Optional SSL environment variables (uncomment and configure if using HTTPS) + # - SERVER_SSL_KEY_STORE=${SERVER_SSL_KEY_STORE} + # - SERVER_SSL_KEY_STORE_PASSWORD=${SERVER_SSL_KEY_STORE_PASSWORD} + # - SERVER_SSL_KEY_STORE_TYPE=${SERVER_SSL_KEY_STORE_TYPE} + # - SERVER_SSL_KEY_ALIAS=${SERVER_SSL_KEY_ALIAS} + # - SERVER_PORT=8443 + #volumes: # Uncomment the line below if using SSL keystore + # - ./keystore.p12:/app/keystore.p12 # path to your keystore file + network_mode: bridge + + networks: + nr-network: + driver: bridge + ``` + + 2. 도커 Compose 파일에서 `DEFAULT_VALUE` 플레이스홀더를 실제 값으로 바꿔 환경 변수를 설정하거나, 명령어를 실행하기 전에 시스템에 환경 변수를 생성하세요. + + + 비밀 정보가 포함된 환경 변수 파일은 절대로 버전 관리 시스템에 커밋하지 마십시오. 실제 운영 환경에서는 안전한 비밀 키 관리 방식을 사용하십시오. + + + 3. 서비스를 시작하려면 다음 명령을 실행하십시오. + + ```bash + docker-compose up -d + ``` + + ### 옵션 B: 도커 이미지 직접 실행 + + [도커 허브 레지스트리](https://hub.docker.com/r/newrelic/nr-ghe-collector) 에서 직접 도커 이미지를 다운로드하고 조직이 선호하는 CI/CD 파이프라인 또는 구현, 배포 방법을 사용하여 실행할 수 있습니다. 참고로 고객은 수집기 서비스를 시작할 때 위에 나열된 모든 환경 변수를 전달해야 합니다. + + **✓ 성공 지표:** + + * Collector 서비스가 실행 중이며 구성된 포트에서 접근 가능합니다. + * 도커 컨테이너 로그인은 오류 없이 성공적인 시작을 표시합니다. + * 서비스는 (설정된 경우) 상태 점검에 응답합니다. + + + + ## 조직에 GitHub 앱을 설치하세요 + + 수집기 서비스가 실행된 후에는 통합하려는 특정 조직에 GitHub 앱을 설치해야 합니다. + + 1. GitHub Enterprise 인스턴스로 이동합니다. + 2. **Settings** → **Developer Settings** → **GitHub Apps** 으로 이동하세요. + 3. 1단계에서 생성한 GitHub 앱을 찾아서 클릭하세요. + 4. 왼쪽 사이드바에서 **Install App** \[앱 설치]를 클릭하세요. + 5. 앱을 설치할 조직을 선택하세요. + 6. 모든 저장소에 설치할지, 아니면 특정 저장소만 선택할지 결정하세요. + 7. 설치를 완료하려면 **Install** \[설치]를 클릭하십시오. + + **✓ 성공 지표:** + + * 웹훅 전달 내역은 GitHub 앱 설정에 표시됩니다. + * 수집기 서비스 로그에 인증 오류가 없습니다. + + + + ## 뉴렐릭 UI에서 통합 설정 완료 + + 수집기 서비스가 실행되고 GitHub 앱이 GHE 조직에 설치되면 뉴럴릭 UI 의 안내에 따라 통합 설정을 완료하십시오. + + 1. 해당 GHE 조직이 뉴렐릭 UI 에 표시됩니다. + + 2. 초기 데이터 동기화를 시작하려면 **First time sync** \[최초 동기화]를 클릭하세요. + + 3. *(선택 사항)* 수동으로 데이터를 동기화하려면 **On-demand sync** \[주문형 동기화를] 클릭합니다. 
+ + + 4시간마다 수동으로 데이터를 동기화할 수 있습니다. 동기화가 지난 4시간 이내에 발생한 경우 **On-demand sync** \[주문형 동기화] 버튼은 비활성화됩니다. + + + 4. 동기화 시작 메시지가 표시되면 **Continue** \[계속을] 클릭하십시오. **GitHub Enterprise Integration** \[GitHub Enterprise 통합] 화면에는 팀 수와 저장소 수가 표시되며 5초마다 새로 고쳐집니다. 모든 데이터를 완전히 가져오는 데 15-30분이 소요됩니다(소식 저장소 수에 따라 소요 시간이 달라질 수 있습니다). + + GitHub Enterprise Integration dashboard showing integration progress + + ### 데이터 보기 + + **GitHub Enterprise Integration** \[GitHub Enterprise 통합] 화면에서: + + * 가져온 팀 정보를 [Teams](/docs/service-architecture-intelligence/teams/teams) 에서 보려면 **Go to Teams** \[Teams로 이동]을 클릭하세요. + * [Catalogs](/docs/service-architecture-intelligence/catalogs/catalogs) 에서 가져온 가져온 정보를 보려면, **Go to Repositories** \[다음으로 이동] 을 클릭하세요. + + + + ## 팀 배정을 구성합니다(선택 사항). + + GitHub Enterprise에서 `teamOwningRepo` 사용자 지정 속성으로 추가하면 GitHub 저장소를 팀에 자동으로 할당할 수 있습니다. + + 1. 조직 수준에서 사용자 정의 속성을 만들고 저장소 수준에서 사용자 정의 속성에 대한 값을 할당합니다. 또한, 조직 수준에서 여러 저장소에 대한 사용자 정의 속성을 동시에 설정할 수 있습니다. + 2. 그런 다음 뉴렐릭 Teams에서 [자동 소유권](/docs/service-architecture-intelligence/teams/manage-teams/#assign-ownership) 기능을 활성화하고 태그 키로 `team` 사용해야 합니다. + + 이 설정이 완료되면 뉴렐릭은 각 리포지터리를 해당 팀과 자동으로 매칭합니다. + + 사용자 지정 속성을 만드는 방법에 대한 자세한 내용은 [GitHub 문서를](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization) 참조하세요. + + + +## 문제점 해결 + +### 일반적인 문제점 및 해결책 + +**웹훅 전달 실패:** + +* GitHub Enterprise에서 수집기 서비스가 실행 중이고 접근 가능한지 확인하십시오. +* 방화벽 설정과 네트워크 연결 상태를 확인하십시오. + +**인증 오류:** + +* GitHub 앱 ID와 개인 키가 올바르게 구성되었는지 확인하세요. +* 개인 키가 DER 형식으로 올바르게 변환되고 Base64로 인코딩되었는지 확인하십시오. +* GitHub App과 수집기 설정 간에 웹훅 비밀이 일치하는지 확인하세요. + +**동기화 실패:** + +* GitHub 앱에 필요한 권한이 있는지 확인하세요. +* 앱이 올바른 조직에 설치되었는지 확인하십시오. +* 특정 오류 메시지를 확인하려면 수집기 서비스 로그를 검토하십시오. + +**네트워크 연결 문제:** + +* 수집기 서비스가 GitHub Enterprise 인스턴스에 연결할 수 있는지 확인하십시오. +* HTTPS를 사용하는 경우 SSL 인증서가 올바르게 구성되었는지 확인하십시오. +* GitHub Enterprise 도메인에 대한 DNS 확인을 확인하세요. + +## 제거 + +GitHub Enterprise 통합을 제거하려면 다음 단계를 따르세요. + +1. GitHub Enterprise UI로 이동합니다. +2. 앱이 설치된 조직의 설정으로 이동하세요. +3. GitHub Enterprise 인터페이스에서 GitHub 앱을 직접 제거하세요. 이 작업을 수행하면 백앤드 프로세스가 데이터 수집을 중지합니다. +4. 도커 환경에서 수집기 서비스를 중지하고 제거하십시오. 
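다음은 4단계에서 수집기 서비스를 중지하고 제거하는 예시 명령입니다. 컨테이너 이름 `nr-ghe-collector` 는 위 도커 Compose 예시를 그대로 사용했다고 가정한 값입니다.

```bash
# docker compose로 배포한 경우: docker-compose.yml이 있는 디렉터리에서 실행
docker-compose down

# 도커 이미지를 직접 실행한 경우: 컨테이너를 중지하고 제거
docker stop nr-ghe-collector
docker rm nr-ghe-collector
```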
\ No newline at end of file diff --git a/src/i18n/content/pt/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx b/src/i18n/content/pt/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx index bdd7c7206b3..ffeea39f0d7 100644 --- a/src/i18n/content/pt/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx +++ b/src/i18n/content/pt/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent.mdx @@ -141,7 +141,7 @@ O agente automaticamente utiliza estes frameworks e bibliotecas: * Pulverize 1.3.1 para o mais recente * Tomcat 7.0.0 até o mais recente * Undertow 1.1.0.Final para a versão mais recente - * WebLogic 12.1.2.1 a 12.2.x (exclusivo) + * WebLogic 12.1.2.1 a 14.1.1 * WebSphere 8 a 9 (exclusivo) * WebSphere Liberty 8.5 até a versão mais recente * Wildfly 8.0.0.Final para a versão mais recente diff --git a/src/i18n/content/pt/docs/cci/azure-cci.mdx b/src/i18n/content/pt/docs/cci/azure-cci.mdx index b2e8a19aca3..ac460d78ec5 100644 --- a/src/i18n/content/pt/docs/cci/azure-cci.mdx +++ b/src/i18n/content/pt/docs/cci/azure-cci.mdx @@ -395,7 +395,7 @@ Antes de conectar Azure à Inteligência de Custos na Nuvem, certifique-se de te - Insira o caminho base onde os dados de cobrança são armazenados no contêiner (por exemplo, `20251001-20251031` para outubro de 2025). **Observação**: se a exportação de faturamento for publicada diretamente na raiz do contêiner, deixe este campo vazio. + Insira o caminho relativo no contêiner de onde você vê as quedas de dados de faturamento em um formato mensal (por exemplo, `20251101-20251130` para novembro de 2025 ou `20251201-20251231` para dezembro de 2025). **Observação**: Se sua exportação de faturamento for publicada diretamente na raiz do contêiner, deixe este campo vazio. diff --git a/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx index 25ef568beed..70178f5087b 100644 --- a/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx +++ b/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx @@ -26,28 +26,26 @@ Nossa integração com o Snowflake permite que você colete dados abrangentes so ## Configurar métrica do Snowflake - Execute o comando abaixo para armazenar a métrica do Snowflake no formato JSON, permitindo que o nri-flex a leia. Certifique-se de modificar ACCOUNT, USERNAME e SNOWSQL\_PWD adequadamente. + Execute o comando abaixo para armazenar as métricas do Snowflake em formato JSON, permitindo que o nri-flex as leia. Certifique-se de modificar `ACCOUNT`, `USERNAME` e `SNOWSQL_PWD` de acordo. 
```shell - - # Run the below command as a 1 minute cronjob - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o 
timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", 
SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json - SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json - + # Run the below command as a 1 minute cronjob + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > 
/tmp/snowflake-stage-storage-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o 
remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json + SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json ``` @@ -59,130 +57,126 @@ Nossa integração com o Snowflake permite que você colete dados abrangentes so 1. Crie um arquivo chamado `nri-snowflake-config.yml` no diretório integração: ```shell - - touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml - + touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml ``` 2. Adicione o trecho a seguir ao arquivo `nri-snowflake-config.yml` para permitir que o agente capture dados do Snowflake: ```yml - - --- - integrations: - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountMetering - apis: - - name: snowflakeAccountMetering - file: /tmp/snowflake-account-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseLoadHistory - apis: - - name: snowflakeWarehouseLoadHistory - file: /tmp/snowflake-warehouse-load-history-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeWarehouseMetering - apis: - - name: snowflakeWarehouseMetering - file: /tmp/snowflake-warehouse-metering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeTableStorage - apis: - - name: snowflakeTableStorage - file: /tmp/snowflake-table-storage-metrics.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStageStorageUsage - apis: - - name: snowflakeStageStorageUsage - file: /tmp/snowflake-stage-storage-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeReplicationUsgae - apis: - - name: snowflakeReplicationUsgae - file: /tmp/snowflake-replication-usage-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakeQueryHistory - apis: - - name: snowflakeQueryHistory - file: /tmp/snowflake-query-history.json - - name: nri-flex - interval: 30s - config: - name: snowflakePipeUsage - apis: - - name: snowflakePipeUsage - file: /tmp/snowflake-pipe-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLongestQueries - apis: - - name: snowflakeLongestQueries - file: /tmp/snowflake-longest-queries.json - - name: nri-flex - interval: 30s - config: - name: snowflakeLoginFailure - apis: - - name: snowflakeLoginFailure - file: /tmp/snowflake-login-failures.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDatabaseStorageUsage - apis: - - name: snowflakeDatabaseStorageUsage - 
file: /tmp/snowflake-database-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeDataTransferUsage - apis: - - name: snowflakeDataTransferUsage - file: /tmp/snowflake-data-transfer-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeCreditUsageByWarehouse - apis: - - name: snowflakeCreditUsageByWarehouse - file: /tmp/snowflake-credit-usage-by-warehouse.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAutomaticClustering - apis: - - name: snowflakeAutomaticClustering - file: /tmp/snowflake-automatic-clustering.json - - name: nri-flex - interval: 30s - config: - name: snowflakeStorageUsage - apis: - - name: snowflakeStorageUsage - file: /tmp/snowflake-storage-usage.json - - name: nri-flex - interval: 30s - config: - name: snowflakeAccountDetails - apis: - - name: snowflakeAccountDetails - file: /tmp/snowflake-account-details.json - + --- + integrations: + - name: nri-flex + interval: 30s + config: + name: snowflakeAccountMetering + apis: + - name: snowflakeAccountMetering + file: /tmp/snowflake-account-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseLoadHistory + apis: + - name: snowflakeWarehouseLoadHistory + file: /tmp/snowflake-warehouse-load-history-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeWarehouseMetering + apis: + - name: snowflakeWarehouseMetering + file: /tmp/snowflake-warehouse-metering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeTableStorage + apis: + - name: snowflakeTableStorage + file: /tmp/snowflake-table-storage-metrics.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStageStorageUsage + apis: + - name: snowflakeStageStorageUsage + file: /tmp/snowflake-stage-storage-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeReplicationUsgae + apis: + - name: snowflakeReplicationUsgae + file: /tmp/snowflake-replication-usage-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakeQueryHistory + apis: + - name: snowflakeQueryHistory + file: /tmp/snowflake-query-history.json + - name: nri-flex + interval: 30s + config: + name: snowflakePipeUsage + apis: + - name: snowflakePipeUsage + file: /tmp/snowflake-pipe-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLongestQueries + apis: + - name: snowflakeLongestQueries + file: /tmp/snowflake-longest-queries.json + - name: nri-flex + interval: 30s + config: + name: snowflakeLoginFailure + apis: + - name: snowflakeLoginFailure + file: /tmp/snowflake-login-failures.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDatabaseStorageUsage + apis: + - name: snowflakeDatabaseStorageUsage + file: /tmp/snowflake-database-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeDataTransferUsage + apis: + - name: snowflakeDataTransferUsage + file: /tmp/snowflake-data-transfer-usage.json + - name: nri-flex + interval: 30s + config: + name: snowflakeCreditUsageByWarehouse + apis: + - name: snowflakeCreditUsageByWarehouse + file: /tmp/snowflake-credit-usage-by-warehouse.json + - name: nri-flex + interval: 30s + config: + name: snowflakeAutomaticClustering + apis: + - name: snowflakeAutomaticClustering + file: /tmp/snowflake-automatic-clustering.json + - name: nri-flex + interval: 30s + config: + name: snowflakeStorageUsage + apis: + - name: snowflakeStorageUsage + file: /tmp/snowflake-storage-usage.json + - name: nri-flex + interval: 30s + config: + name: 
snowflakeAccountDetails + apis: + - name: snowflakeAccountDetails + file: /tmp/snowflake-account-details.json ``` @@ -192,9 +186,7 @@ Nossa integração com o Snowflake permite que você colete dados abrangentes so Reinicie seu agente de infraestrutura. ```shell - sudo systemctl restart newrelic-infra.service - ``` Em alguns minutos, seu aplicativo enviará métricas para [one.newrelic.com](https://one.newrelic.com). @@ -215,9 +207,7 @@ Nossa integração com o Snowflake permite que você colete dados abrangentes so Aqui está uma consulta NRQL para verificar a métrica do Snowflake: ```sql - - SELECT * from snowflakeAccountSample - + SELECT * FROM snowflakeAccountSample ``` diff --git a/src/i18n/content/pt/docs/logs/forward-logs/azure-log-forwarding.mdx b/src/i18n/content/pt/docs/logs/forward-logs/azure-log-forwarding.mdx index b92488a0934..087b8dda575 100644 --- a/src/i18n/content/pt/docs/logs/forward-logs/azure-log-forwarding.mdx +++ b/src/i18n/content/pt/docs/logs/forward-logs/azure-log-forwarding.mdx @@ -36,19 +36,80 @@ Para enviar o log do seu hub de eventos: Siga esses passos: 1. Certifique-se de ter um . + 2. Em **[one.newrelic.com](https://one.newrelic.com/launcher/logger.log-launcher)**, clique em **Integrations & Agents** na navegação esquerda. + 3. Na categoria **Logging** , clique no bloco **Microsoft Azure Event Hub** na lista de fontes de dados. + 4. Selecione a conta para a qual deseja enviar o registro e clique em **Continue**. + 5. Clique em **Generate API key** e copie a chave de API gerada. + 6. Clique em **Deploy to Azure** e uma nova aba será aberta com o modelo ARM carregado no Azure. + 7. Selecione o **Resource group** onde deseja criar os recursos necessários e um **Region**. Apesar de não ser obrigatório, recomendamos instalar o modelo em um novo grupo de recursos, para evitar a exclusão acidental de qualquer um dos componentes que ele cria. + 8. No campo **New Relic license key** , cole a chave de API copiada anteriormente. + 9. Certifique-se de que o [endpoint do New Relic](/docs/logs/log-api/introduction-log-api/#endpoint) esteja definido como aquele correspondente à sua conta. -10. Opcional: defina como `true` os [logs de atividades de assinatura do Azure](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) que você deseja encaminhar. Consulte [as informações de assinatura](#subscription-activity-logs) neste documento para obter mais detalhes. -11. Clique em **Review + create**, revise os dados inseridos e clique em **Create**. + +10. Selecione o modo de escala. O valor padrão é `Basic`. + +11. Opcional: Configure o parâmetro de agrupamento do EventHub (disponível na versão 2.8.0 ou superior) para otimizar o desempenho: + + * **Tamanho máximo do lote de eventos**: Número máximo de eventos por lote (padrão: 500, mínimo: 1) + * **Tamanho mínimo do lote de eventos**: Número mínimo de eventos por lote (padrão: 20, mínimo: 1) + * **Tempo máximo de espera**: Tempo máximo de espera para construir um lote no formato HH:MM:SS (padrão: 00:00:30) + +12. Opcional: defina como `true` os [logs de atividades de assinatura do Azure](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log) que você deseja encaminhar. Consulte [as informações de assinatura](#subscription-activity-logs) neste documento para obter mais detalhes. + +13. Clique em **Review + create**, revise os dados inseridos e clique em **Create**. Observe que o modelo é idempotente. 
Você pode iniciar o encaminhamento do log do Event Hub e, em seguida, executar novamente o mesmo modelo para configurar o encaminhamento [do log de atividades do Azure assinatura](#subscription-activity-logs) , concluindo a etapa 10. +### Configure o agrupamento e o dimensionamento do EventHub (opcional) [#eventhub-configuration] + +A partir da versão 2.8.0, o modelo ARM oferece suporte a opções avançadas de configuração do EventHub para otimizar o desempenho e as taxas de transferência: + +**Parâmetro de lote do gatilho do EventHub:** + +Você pode configurar o comportamento de agrupamento para controlar como os eventos são processados. Essas configurações são definidas como configurações do aplicativo Azure Functions: + +* **Tamanho máximo do lote de eventos** : Número máximo de eventos entregues em um lote à função (padrão: 500, mínimo: 1). Isso controla o limite máximo de eventos processados em conjunto. + +* **Tamanho mínimo do lote de eventos** : Número mínimo de eventos entregues em um lote para a função (padrão: 20, mínimo: 1). A função aguardará até acumular pelo menos esse número de eventos antes de processá-los, a menos que o tempo máximo de espera seja atingido. + +* **Tempo máximo de espera** : Tempo máximo de espera para a formação de um lote antes de enviá-lo para a função (padrão: 00:00:30, formato: HH:MM:SS). Isso garante o processamento em tempo hábil mesmo quando o volume de eventos é baixo. + +Esses parâmetros ajudam a otimizar as taxas de transferência e a utilização de recursos com base no volume de logs e nos requisitos de processamento. Ajuste esses valores de acordo com o seu caso de uso específico: + +* Aumente o tamanho dos lotes para cenários de alto volume para melhorar as taxas de transferência +* Diminua o tamanho dos lotes para atender aos requisitos de baixa latência. +* Ajuste o tempo de espera para equilibrar a latência e a eficiência do processamento em lote. + +**Dimensionando a configuração (v2.7.0+):** + +O modelo permite configurar o modo de dimensionamento Azure Functions, possibilitando otimizar custos e desempenho com base na sua workload: + +* **Modo de dimensionamento básico**: usa um plano baseado no consumo de SKU dinâmico (nível Y1) por padrão, onde Azure adiciona e remove automaticamente instâncias de funções com base no número de eventos recebidos. + + * Se a opção `disablePublicAccessToStorageAccount` estiver ativada, será utilizado um plano SKU Básico (nível B1) para suportar a integração com a VNet. + * Este modo é ideal para cargas de trabalho variáveis e proporciona otimização automática de custos com preços por execução. + * O namespace EventHub inclui 4 partições com escala de unidade de taxas de transferência padrão. + +* **Modo de escalonamento empresarial**: Oferece recursos avançados de escalonamento com recursos de computação dedicados e maior controle sobre o escalonamento instantâneo. Este modo oferece: + + * Funcionalidade de dimensionamento automático para o Aplicativo de Funções e o Hub de Eventos. + * Plano de hospedagem Elastic Premium (EP1) com escalonamento por site ativado. + * Inflação automática do EventHub habilitada com taxas máximas de transferência de unidades de 40 + * Aumento do número de partições (32 partições em vez de 4 no modo Básico) para melhor paralelismo. + * Desempenho previsível e menor latência com instância pré-aquecida + * Mais adequado para cenários de encaminhamento de logs de alto volume e missão crítica. 
+ +**Observações importantes:** + +* Ao atualizar do modo Básico para o modo Empresarial, você precisará provisionar novamente o EventHub devido à limitação do Azure de que uma SKU Standard não pode alterar a quantidade de partições após a criação. + ### Opcional: envie o log de atividades do Azure da sua assinatura [#subscription-activity-logs] diff --git a/src/i18n/content/pt/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx b/src/i18n/content/pt/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx index 81f6ecdcc1e..ee02d050931 100644 --- a/src/i18n/content/pt/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx +++ b/src/i18n/content/pt/docs/mobile-monitoring/mobile-monitoring-ui/application-not-responding.mdx @@ -95,7 +95,11 @@ Quando ocorre um erro ANR, o Android captura um stack trace. Um stack trace é u **Desofuscação:** -Atualmente, New Relic não desofusca o rastreamento stack ANR automaticamente na plataforma. O suporte para esse recurso está planejado para uma versão futura. Enquanto isso, você pode baixar o stack trace ANR ofuscado do New Relic e então usar ferramentas offline, como o utilitário `ndk-stack` ou `retrace` do Proguard/R8, para simbolizar o stack trace manualmente. +O New Relic simboliza automaticamente os frames de pilha Java em rastreamentos de pilha ANR, fornecendo nomes de métodos e números de linha legíveis diretamente na plataforma. + + + Frames de pilha nativos (NDK) não são simbolizados atualmente. Para frames de pilha nativos, você pode baixar o rastreamento de pilha do New Relic e usar ferramentas offline como `ndk-stack` para simbolizar manualmente. + ## Desativar monitoramento ANR [#disable-anr-monitoring] diff --git a/src/i18n/content/pt/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx b/src/i18n/content/pt/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx new file mode 100644 index 00000000000..e5fdd9c73ef --- /dev/null +++ b/src/i18n/content/pt/docs/release-notes/docs-release-notes/docs-12-19-2025.mdx @@ -0,0 +1,79 @@ +--- +subject: Docs +releaseDate: '2025-12-19' +version: 'December 15 - December 19, 2025' +translationType: machine +--- + +### Novos documentos + +* Adicionado [Impacto do usuário](/docs/browser/new-relic-browser/browser-pro-features/user-impact) para fornecer orientação abrangente para entender os sinais de frustração e o impacto no desempenho na experiência do usuário. + +### Grandes mudanças + +* Atualizado o [Catálogo de ações](/docs/workflow-automation/setup-and-configuration/actions-catalog) com reestruturação e organização extensivas das ações do fluxo de trabalho. +* Atualizado [Logs do Browser: Comece a usar](/docs/browser/browser-monitoring/browser-pro-features/browser-logs/get-started) com atualizações de captura de log automática e manual. +* Atualizado [Visualizações de página: Examine o desempenho da página](/docs/browser/new-relic-browser/browser-pro-features/page-views-examine-page-performance) com sinais de frustração e informações sobre o impacto no desempenho. +* Adicionado [Referência de provedores de dados](/docs/sap-solutions/additional-resources/data-providers-reference) para fornecer orientação detalhada para provedores de dados de soluções SAP. + +### Pequenas mudanças + +* Adicionada documentação de configuração do filtro eBPF à [Instalação da Observabilidade de Rede eBPF no Kubernetes](/docs/ebpf/k8s-installation) e [Instalação da Observabilidade de Rede eBPF no Linux](/docs/ebpf/linux-installation). 
+* Atualizado [Agentic AI: Configuração do Protocolo de Contexto do Modelo](/docs/agentic-ai/mcp/setup) com instruções de configuração aprimoradas. +* Atualizado [Compatibilidade e requisitos do agente PHP](/docs/apm/agents/php-agent/getting-started/php-agent-compatibility-requirements) com Kinesis Data Streams e Drupal 11.1/11.2. compatibilidade. +* Atualizado [Compatibilidade e requisitos do agente .NET](/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements) com as versões compatíveis mais recentes verificadas para dependências. +* Atualizado [Compatibilidade e requisitos do agente Node.js](/docs/apm/agents/nodejs-agent/getting-started/compatibility-requirements-nodejs-agent) com o relatório de compatibilidade mais recente. +* Compatibilidade e requisitos do [agente Java](/docs/apm/agents/java-agent/getting-started/compatibility-requirements-java-agent) atualizados com informações de compatibilidade atuais. +* Aprimorado [Instrumente a função AWS Lambda com Python](/docs/serverless-function-monitoring/azure-function-monitoring/container) com comando de instalação explícito para funções do Azure em contêiner. +* Atualizado [Monitoramento de fluxo de rede](/docs/network-performance-monitoring/setup-performance-monitoring/network-flow-monitoring) com suporte à versão Ubuntu mais recente para kTranslate. +* Atualizado [Atualização do Lambda para a experiência APM](/docs/serverless-function-monitoring/aws-lambda-monitoring/instrument-lambda-function/upgrade-to-apm-experience) para refletir o novo suporte à função de contêiner. +* Adicionadas publicações "Novidades" para: + * [Transaction 360](/whats-new/2025/12/whats-new-12-15-transaction-360) + +### Notas de versão + +* Fique por dentro dos nossos últimos lançamentos: + + * [Agente PHP v12.3.0.28](/docs/release-notes/agent-release-notes/php-release-notes/php-agent-12-3-0-28): + + * Adicionada instrumentação aws-sdk-php Kinesis Data Streams. + * Corrigido problema em que o daemon não limpava o cache de pacotes na reinicialização. + * Versão do golang atualizada para 1.25.5. + + * [Agente Node.js v13.8.1](/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-13-8-1): + * Atualizada a instrumentação do AWS Lambda para pular o encapsulamento do callback do manipulador se não estiver presente. + + * [Agente Java v8.25.1](/docs/release-notes/agent-release-notes/java-release-notes/java-agent-8251): + * Corrigido erro de Kotlin Coroutine sobre a implementação de terceiros de `CancellableContinuation`. + + * [Agente de Browser v1.306.0](/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1.306.0): + + * Adicionado controle para a API de log por meio de uma flag RUM separada. + * Validação aprimorada para responseStart antes de confiar em onTTFB. + * Removida a sintaxe de quebra de linha da saída do webpack. + + * [Integração do Kubernetes v3.51.1](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-1): + * Lançado com as versões do gráfico newrelic-infrastructure-3.56.1 e nri-bundle-6.0.30. + + * [NRDOT v1.7.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-15): + * Adicionados componentes ohi à distribuição nrdot-collector-experimental. + + * [NRDOT v1.6.0](/docs/release-notes/nrdot-release-notes/nrdot-2025-12-12): + + * Atualizou as versões dos componentes otel de v0.135.0 para v0.141.0. + * Corrigido CVE-2025-61729, atualizando para golang 1.24.11. 
+ * Abordada a descontinuação da configuração do transformprocessor de 0.119.0. + + * [Lançamento do Job Manager 493](/docs/release-notes/synthetics-release-notes/job-manager-release-notes/job-manager-release-493): + + * Corrigido o problema de compatibilidade com o Docker 29 causado pela versão mínima da API atualizada para 1.44. + * Adicionada a máscara de dados para informações confidenciais para cobrir os resultados de trabalhos com falha. + + * [Node Browser Runtime rc1.5](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.5): + * Atualização de lançamento com as últimas alterações. + + * [Node API Runtime rc1.5](/docs/release-notes/synthetics-release-notes/node-api-runtime-release-notes/node-api-runtime-rc1.5): + * Atualização de lançamento com as últimas alterações. + + * [Node Browser Runtime rc1.6](/docs/release-notes/synthetics-release-notes/node-browser-runtime-release-notes/node-browser-runtime-rc1.6): + * Atualização de lançamento com as últimas alterações. \ No newline at end of file diff --git a/src/i18n/content/pt/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx b/src/i18n/content/pt/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx new file mode 100644 index 00000000000..3e92c7b8bfa --- /dev/null +++ b/src/i18n/content/pt/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-3-51-2.mdx @@ -0,0 +1,13 @@ +--- +subject: Kubernetes integration +releaseDate: '2025-12-23' +version: 3.51.2 +translationType: machine +--- + +Para uma descrição detalhada das mudanças, consulte as [notas de lançamento](https://github.com/newrelic/nri-kubernetes/releases/tag/v3.51.2). 
+ +Essa integração está incluída nas seguintes versões de gráfico: + +* [newrelic-infrastructure-3.56.2](https://github.com/newrelic/nri-kubernetes/releases/tag/newrelic-infrastructure-3.56.2) +* [nri-bundle-6.0.31](https://github.com/newrelic/helm-charts/releases/tag/nri-bundle-6.0.31) \ No newline at end of file diff --git a/src/i18n/content/pt/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx b/src/i18n/content/pt/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx new file mode 100644 index 00000000000..92dd4dcb172 --- /dev/null +++ b/src/i18n/content/pt/docs/release-notes/nrdot-release-notes/nrdot-2025-12-19.mdx @@ -0,0 +1,17 @@ +--- +subject: NRDOT +releaseDate: '2025-12-19' +version: 1.8.0 +metaDescription: Release notes for NRDOT Collector version 1.8.0 +translationType: machine +--- + +## Registro de alterações + +### Recurso + +* feat: Atualiza as versões dos componentes otel de v0.141.0 para v0.142.0 (#464) + +### Correções de bugs + +* fix: Força expr-lang/expr:1.17.7 para corrigir CVE-2025-68156 (#468) \ No newline at end of file diff --git a/src/i18n/content/pt/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx b/src/i18n/content/pt/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx index be6efee1604..63e4ea6e552 100644 --- a/src/i18n/content/pt/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx +++ b/src/i18n/content/pt/docs/security/new-relic-security/security-bulletins/security-bulletin-nr25-02.mdx @@ -36,7 +36,7 @@ A New Relic recomenda enfaticamente que seus clientes que utilizam a instrumenta - diff --git a/src/i18n/content/pt/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx b/src/i18n/content/pt/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx new file mode 100644 index 00000000000..d56ea63b497 --- /dev/null +++ b/src/i18n/content/pt/docs/service-architecture-intelligence/github-integrations/github-cloud-integration.mdx @@ -0,0 +1,169 @@ +--- +title: Inteligência de Arquitetura de Serviço com integração na nuvem do GitHub +tags: + - New Relic integrations + - GitHub integration +metaDescription: 'Learn how to integrate GitHub with New Relic to import repositories, teams, and user data for enhanced service architecture intelligence.' +freshnessValidatedDate: never +translationType: machine +--- + +A integração GitHub aprimora a Inteligência de Arquitetura de Serviços enriquecendo seus dados New Relic com o contexto de sua organização GitHub. Ao conectar sua conta do GitHub, você pode importar seus dados de repositório, equipes e pull request para o New Relic. Essas informações adicionais reforçam o valor das [Equipes](/docs/service-architecture-intelligence/teams/teams), [Catálogos](/docs/service-architecture-intelligence/catalogs/catalogs) e [Painéis de Avaliação](/docs/service-architecture-intelligence/scorecards/getting-started), proporcionando uma visão mais completa e integrada do seu trabalho de engenharia. + +## Antes de você começar + +**Pré-requisitos:** + +* Você deve ter a função de Gerente de organização ou Gerente de domínio de autenticação. + +**Plataforma suportada:** + +* Nuvem GitHub +* GitHub Enterprise Cloud (sem residência de dados) + +**Regiões suportadas:** regiões dos EUA e UE + + + * O GitHub Enterprise Server e o GitHub Enterprise Cloud com residência de dados não são suportados. + * A instalação da integração em contas de usuário do GitHub não é suportada. 
Embora o GitHub permita a instalação do aplicativo no nível do usuário, o processo de sincronização não funcionará e nenhum dado será importado para o New Relic. + * A integração com o GitHub não está em conformidade com o FedRAMP. + + +## Quais dados podem ser sincronizados + +A integração com o GitHub permite que você escolha seletivamente quais tipos de dados importar para o New Relic, dando a você controle sobre quais informações são sincronizadas: + +### Tipos de dados disponíveis + +* **Repositório e pull request**: Importe dados de repositório e pull request para melhor visibilidade do código e rastreamento de implantação + +* **Equipes**: Importe equipes do GitHub e seus membros para aprimorar o gerenciamento de equipes e o mapeamento de responsabilidades. + + + **Conflitos de integração de equipes**: Se as equipes já tiverem sido integradas ao New Relic a partir de outra fonte (como Okta ou outro provedor de identidade), as equipes do GitHub não poderão ser buscadas e armazenadas para evitar conflitos de dados. Neste caso, você só pode selecionar os dados da solicitação de pull request.\ + **Requisito de visibilidade de e-mail do usuário**: Para garantir que a participação na equipe esteja alinhada com suas equipes do GitHub, os usuários do GitHub precisam ter configurado seus endereços de e-mail como públicos nas configurações de seus perfis do GitHub. Os membros da equipe com configuração de e-mail privada serão excluídos do processo de sincronização de dados do usuário. + + +## Configurar a integração do GitHub + +1. Acesse **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**. + +2. Selecione a conta na qual você deseja configurar a integração. + +3. Selecione **Set up a new integration** e clique em **Continue**. + +4. Na tela **Begin integration** : + + a. Clique em **Get started in GitHub** para conectar sua conta. O aplicativo de observabilidade New Relic abre no GitHub Marketplace. + + b. Conclua a instalação do aplicativo em sua organização do GitHub. Após a instalação, você será redirecionado de volta para a interface do New Relic. + + c. Selecione **Begin integration** novamente e clique em **Continue**. + + d. **Select your data preferences**: Escolha quais tipos de dados você deseja sincronizar: + + * **Teams + Users**: importe estruturas de equipe do GitHub e informações do usuário. + * **Repositories + Pull Requests**: Importa dados de repositório e pull request. + * **Both**: Importar todos os tipos de dados disponíveis. + + e. Se você selecionou **Teams + Users**, será exibida uma lista de todas as equipes do GitHub. Selecione todas as equipes ou uma seleção delas para importar. + + f. Clique em **Start first sync** para começar a importar os dados selecionados. + + g. Depois de visualizar a mensagem **Sync started** , clique em **Continue**. A tela **Integration status** exibirá a contagem dos tipos de dados selecionados (equipes, repositório, etc.), atualizando a cada 5 segundos. Aguarde alguns minutos para a importação completa de todos os dados. + + GitHub integration + +5. *(Opcional)* Na tela **GitHub integration**, você pode acessar seus dados importados: + + * Clique em **Go to Teams** para visualizar as equipes importadas na página [Teams](/docs/service-architecture-intelligence/teams/teams) (caso as equipes tenham sido selecionadas durante a configuração). 
+ * Clique em **Go to Repositories** para visualizar as informações do repositório importado no catálogo [Repositories](/docs/service-architecture-intelligence/repositories/repositories) (se o repositório tiver sido selecionado durante a configuração). + +## Gerencie sua integração com o GitHub + +Depois de configurar sua integração com o GitHub, você pode gerenciá-la por meio da interface do New Relic. Isso inclui atualizar dados, editar configurações e desinstalar quando necessário. + +### Gerenciamento de integração de acesso + +1. Acesse **[one.newrelic.com > + Integration & Agents > GitHub integration](https://one.newrelic.com/marketplace/install-data-source?state=9306060d-b674-b245-083e-ff8d42765e0d)**. + +2. Na etapa **Select an action** , selecione **Manage your organization** e clique em **Continue**. + + Screenshot showing the manage organization option in GitHub integration + +A tela **Manage GitHub integration** exibe sua organização conectada com seu status de sincronização atual e tipos de dados. + +### Atualizar dados + +A opção Atualizar dados oferece uma maneira simplificada de atualizar seus dados do GitHub no New Relic. + +**Para atualizar dados:** + +1. Na tela **Manage GitHub integration** , localize sua organização. + +2. Clique em **Refresh data** ao lado da organização que deseja atualizar e, em seguida, clique em **Continue**. + +3. Na etapa **Refresh Data** , clique em **Sync on demand**. + +O sistema validará suas permissões do GitHub e o acesso à organização, buscará apenas dados novos ou alterados desde a última sincronização, processará e mapeará os dados atualizados de acordo com os tipos de dados selecionados e atualizará o status da integração para refletir o registro de data timestamp da sincronização mais recente e as contagens de dados. + +**O que é atualizado:** + +* Equipes e seus membros +* alterações no repositório (novo repositório, repositório arquivado, alterações de permissão) +* Propriedade da equipe atualizada por meio de propriedades personalizadas + + + **Frequência de atualização**: você pode atualizar os dados sempre que necessário. O processo normalmente leva alguns minutos, dependendo do tamanho da sua organização e dos tipos de dados selecionados. + + +### Editar configurações de integração + +Utilize a opção **Edit** para modificar a configuração de integração após a configuração inicial. Você pode ajustar quais tipos de dados são sincronizados entre o GitHub e o New Relic, bem como selecionar quais equipes serão sincronizadas. + +**Para editar a integração do GitHub:** + +1. Na tela **Manage GitHub integration** , localize sua organização. + +2. Clique em **Edit** ao lado da organização que você deseja atualizar e depois clique em **Continue**. + +3. Na etapa **Edit Integration Settings**, ajuste suas seleções conforme necessário. + +4. Clique em **Save changes** para aplicar suas atualizações. + +**O que acontece durante a edição:** + +* Os dados atuais permanecem intactos durante as alterações de configuração. Se a sua seleção de equipes para sincronizar for diferente agora, a seleção anterior não será excluída do New Relic, mas deixará de ser sincronizada com o GitHub. Você pode excluir essas equipes na funcionalidade Equipes. +* Novas configurações se aplicam a sincronizações subsequentes +* Você pode visualizar as alterações antes de aplicá-las +* A integração continua sendo executada com as configurações anteriores até que você salve as alterações + +### Configure a propriedade automática da equipe. 
+ +Você pode atribuir automaticamente o repositório do GitHub às suas equipes adicionando `teamOwningRepo` como uma propriedade personalizada no GitHub. + +Crie a propriedade personalizada no nível da organização e atribua um valor para ela no nível do repositório. Além disso, você pode configurar uma propriedade personalizada para vários repositórios no nível da organização simultaneamente. + +Em seguida, no New Relic Teams, habilite o recurso de propriedade automatizada, certificando-se de usar `team` como chave de tag. + +Depois que isso estiver configurado, associaremos automaticamente cada repositório à sua equipe correta. + +Para obter mais informações sobre como criar propriedades personalizadas, consulte a [documentação do GitHub](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization). + +### Desinstalar a integração do GitHub + +Desinstalar a integração com o GitHub interrompe a sincronização de dados da organização selecionada. Você terá a opção de preservar ou excluir os dados importados anteriormente no New Relic. + +**Para desinstalar:** + +1. Na tela **Manage GitHub integration** , localize a organização que você deseja desinstalar e clique em **Uninstall**. + +2. Na caixa de diálogo de confirmação, selecione se deseja Manter os dados ou Excluir os dados. + +3. Analise os detalhes e clique em Desinstalar organização para confirmar. + +4. Você verá uma mensagem de sucesso confirmando a desinstalação. + + + **Retenção de dados após a desinstalação**: Os dados mantidos não serão mais sincronizados com o GitHub e poderão ser excluídos manualmente posteriormente na plataforma New Relic (por exemplo, por meio do recurso Teams). + \ No newline at end of file diff --git a/src/i18n/content/pt/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx b/src/i18n/content/pt/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx new file mode 100644 index 00000000000..1a95fa53b5d --- /dev/null +++ b/src/i18n/content/pt/docs/service-architecture-intelligence/github-integrations/github-enterprise-integration.mdx @@ -0,0 +1,567 @@ +--- +title: Inteligência da Arquitetura de Serviço com GitHub Enterprise (on-premises) +tags: + - New Relic integrations + - GitHub Enterprise integration +metaDescription: Integrate your on-premise GitHub Enterprise (GHE) environment with New Relic using a secure collector service and GitHub App for automated data ingestion. +freshnessValidatedDate: never +translationType: machine +--- + + + Ainda estamos trabalhando nesse recurso, mas adoraríamos que você experimentasse! + + Atualmente, esse recurso é fornecido como parte de um programa de visualização de acordo com nossas [políticas de pré-lançamento](/docs/licenses/license-information/referenced-policies/new-relic-pre-release-policy). + + +Você está procurando obter insights mais profundos sobre a arquitetura do seu serviço, aproveitando os dados da sua conta GitHub Enterprise no local? A integração do New Relic GitHub Enterprise importa repositórios e equipes diretamente para a plataforma New Relic usando um serviço de coletor seguro implantado em sua rede privada. + +Com o novo recurso de busca seletiva de dados, você pode escolher exatamente quais tipos de dados importar — sejam equipes, repositórios e pull requests, ou ambos. 
Esta integração visa aprimorar o gerenciamento e a visibilidade de [Equipes](/docs/service-architecture-intelligence/teams/teams), [Catálogos](/docs/service-architecture-intelligence/catalogs/catalogs) e [Scorecards](/docs/service-architecture-intelligence/scorecards/getting-started) dentro do New Relic. Para obter mais informações, consulte o [recurso Service Architecture Intelligence](/docs/service-architecture-intelligence/getting-started). + +**Pré-requisitos** + +* Conta GitHub Enterprise no local com privilégios de administrador da organização. +* Ambiente Docker para executar o serviço de coletor em sua rede GitHub Enterprise. +* Conta New Relic com as permissões apropriadas para criar integrações. + +## Considerações de segurança + +Esta integração segue as melhores práticas de segurança: + +* Usa a autenticação do GitHub App com permissões mínimas necessárias +* Eventos de webhook são autenticados usando chaves secretas +* Toda a transmissão de dados ocorre via HTTPS +* Nenhuma credencial de usuário é armazenada ou transmitida +* Somente dados de repositórios e equipes são importados + +**Para configurar a integração do GitHub Enterprise:** + + + + ## Crie e configure um aplicativo GitHub + + Na sua instância GHE, navegue até **Settings → Developer Settings → GitHub Apps → New GitHub App**. Para obter instruções detalhadas sobre como criar um GitHub App, consulte a [documentação do GitHub sobre como registrar um GitHub App](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app). + + ### Configurar permissões + + Configure as permissões do aplicativo com precisão para garantir a obtenção de dados perfeita durante a sincronização inicial e a escuta eficiente de eventos de webhook posteriormente. As permissões do aplicativo definem o escopo de acesso que o aplicativo tem a vários recursos de repositório e organização no GitHub. Ao personalizar essas permissões, você pode aprimorar a segurança, garantindo que o aplicativo acesse apenas os dados necessários, minimizando a exposição. A configuração adequada facilita a sincronização inicial de dados e o tratamento confiável de eventos, otimizando a integração do aplicativo com o ecossistema do GitHub. + + Para obter orientações detalhadas sobre as permissões do GitHub App, consulte a [documentação do GitHub sobre como definir permissões para GitHub Apps](https://docs.github.com/en/apps/creating-github-apps/setting-up-a-github-app/choosing-permissions-for-a-github-app). 
+ + #### Permissões de repositório necessárias + + Configure as seguintes permissões no nível do repositório exatamente como mostrado para habilitar a sincronização de dados: + + * **Administração**: Somente leitura ✓ + * **Verificações**: Somente leitura ✓ + * **Status de commit**: Selecionado ✓ + * **Conteúdo**: Selecionado ✓ + * **Propriedades personalizadas**: Selecionado ✓ + * **Implantações**: Somente leitura ✓ + * **Metadados**: Somente leitura (obrigatório) ✓ + * **Pull requests**: Selecionado ✓ + * **Webhooks**: Somente leitura ✓ + + #### Permissões de organização necessárias + + Configure as seguintes permissões no nível da organização exatamente como mostrado: + + * **Administração**: Somente leitura ✓ + * **Funções de organização personalizadas**: Somente leitura ✓ + * **Propriedades personalizadas**: Somente leitura ✓ + * **Funções de repositório personalizadas**: Somente leitura ✓ + * **Eventos**: Somente leitura ✓ + * **Membros**: Somente leitura ✓ + * **Webhooks**: Somente leitura ✓ + + #### Assinaturas de eventos de webhook + + Selecione os seguintes eventos de webhook exatamente como mostrado para sincronização e monitoramento em tempo real: + + **✓ Selecione estes eventos:** + + * `check_run` - Verifique as atualizações do status da execução + * `check_suite` - Conclusão da suíte de verificações + * `commit_comment` - Comentários sobre commits + * `create` - Criação de branch ou tag + * `custom_property` - Alterações de propriedade personalizadas para atribuições de equipe + * `custom_property_values` - Alterações nos valores de propriedades personalizadas + * `delete` - Exclusão de branch ou tag + * `deployment` - Atividades de implantação + * `deployment_review` - Processos de revisão de implantação + * `deployment_status` - Atualizações de status de implantação + * `fork` - Eventos de fork do repositório + * `installation_target` - Alterações na instalação do aplicativo GitHub + * `label` - Alterações de rótulos em problemas e solicitações pull + * `member` - Alterações no perfil do membro + * `membership` - Adições e remoções de membros + * `meta` - Alterações de metadados do aplicativo GitHub + * `milestone` - Alterações de marcos + * `organization` - Alterações no nível da organização + * `public` - Alterações de visibilidade do repositório + * `pull_request` - Atividades de pull request + * `pull_request_review` - Atividades de revisão de pull request + * `pull_request_review_comment` - Atividades de comentários de revisão + * `pull_request_review_thread` - Atividades de thread de revisão de pull request + * `push` - Pushs e commits de código + * `release` - Publicações e atualizações de lançamento + * `repository` - Criação, exclusão e modificações de repositórios + * `star` - Eventos de estrela do repositório + * `status` - Atualizações de status de commit + * `team` - Criação e modificações de equipe + * `team_add` - Adições de membros da equipe + * `watch` - Eventos de observação do repositório + + + **Melhor prática de segurança**: Para reduzir a exposição à segurança, siga o princípio do acesso de privilégio mínimo e habilite apenas as permissões mínimas necessárias para as necessidades de sua integração. 
+ + + ### Configurar webhooks + + Configure a URL do Webhook e crie um Event Secret personalizado para comunicação segura: + + * **URL do Webhook**: Use o seguinte formato com base na implantação do serviço de coletor: + + * Para HTTP: `http://your-domain-name/github/sync/webhook` + * Para HTTPS: `https://your-domain-name/github/sync/webhook` + + **Exemplo**: Se o seu serviço de coleta for implantado em `collector.yourcompany.com`, a URL do webhook seria: `https://collector.yourcompany.com/github/sync/webhook` + + * **Segredo do evento**: Gere uma string aleatória segura (32+ caracteres) para autenticação de webhook. Salve este valor, pois você precisará dele para a variável de ambiente `GITHUB_APP_WEBHOOK_SECRET`. + + ### Gerar e converter chaves + + 1. Após criar o GitHub App, você precisa gerar uma chave privada. Nas configurações do seu GitHub App, clique em **Generate a private key**. O aplicativo gerará e fará o download automaticamente de um ID de aplicativo exclusivo e um arquivo de Chave Privada (formato .pem). Salve-os com segurança, pois eles serão necessários para a configuração do serviço de coletor. + + 2. Converta seu arquivo de chave privada baixado para o formato DER e, em seguida, codifique-o em Base64: + + **Passo 1: Converter .pem para formato DER** + + ```bash + openssl rsa -outform der -in private-key.pem -out output.der + ``` + + **Passo 2: Codificar o arquivo DER em Base64** + + ```bash + # For Linux/macOS + base64 -i output.der -o outputBase64 + cat outputBase64 # Copy this output + + # For Windows (using PowerShell) + [Convert]::ToBase64String([IO.File]::ReadAllBytes("output.der")) + + # Alternative for Windows (using certutil) + certutil -encode output.der temp.b64 && findstr /v /c:- temp.b64 + ``` + + Copie a string Base64 resultante e use-a como o valor da variável de ambiente `GITHUB_APP_PRIVATE_KEY` na configuração do seu coletor. + + **✓ Indicadores de sucesso:** + + * O aplicativo Github foi criado com sucesso + * ID do aplicativo e chave privada são salvos com segurança + * URL do Webhook configurada e acessível + + + + ## Prepare as variáveis de ambiente + + Antes de implantar o serviço de coletor, reúna as seguintes informações: + + ### Variáveis de ambiente obrigatórias + +
+
+  | Variável | Fonte | Como obter |
+  | --- | --- | --- |
+  | `NR_API_KEY` | New Relic | Gere uma chave de API no painel New Relic. |
+  | `NR_LICENSE_KEY` | New Relic | Gere uma Chave de Licença no painel New Relic. |
+  | `GHE_BASE_URL` | Servidor GHE | A URL base para seu servidor GHE (por exemplo, `https://source.datanot.us`). |
+  | `GITHUB_APP_ID` | GitHub App | O ID exclusivo do aplicativo gerado quando você criou o GitHub App. |
+  | `GITHUB_APP_PRIVATE_KEY` | GitHub App | O conteúdo do arquivo de chave privada (`.pem`), convertido em uma string Base64. Consulte a etapa 1 para obter instruções de conversão. |
+  | `GITHUB_APP_WEBHOOK_SECRET` | GitHub App | O valor do Event Secret personalizado que você definiu ao criar o GitHub App. |
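+
+  A título de ilustração, segue um esboço mínimo (com valores fictícios) de como essas variáveis obrigatórias podem ser exportadas no shell antes de iniciar o serviço de coletor:
+
+  ```bash
+  # Valores de exemplo: substitua pelos valores reais da sua conta New Relic e do seu GitHub App
+  export NR_API_KEY="NRAK-XXXXXXXXXXXXXXXXXXX"
+  export NR_LICENSE_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
+  export GHE_BASE_URL="https://ghe.example.com"
+  export GITHUB_APP_ID="123456"
+  # Chave privada convertida para DER e codificada em Base64 (consulte a etapa 1)
+  export GITHUB_APP_PRIVATE_KEY="$(cat outputBase64)"
+  # Deve ser o mesmo Event Secret configurado no GitHub App
+  export GITHUB_APP_WEBHOOK_SECRET="seu-segredo-de-webhook"
+  ```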
+
+  ### Variáveis de ambiente SSL opcionais
+
+  As variáveis de ambiente a seguir são opcionais e servem para habilitar HTTPS na API do serviço de coletor.
+
+  | Variável opcional | Fonte | Como obter |
+  | --- | --- | --- |
+  | `SERVER_SSL_KEY_STORE` | Configuração SSL | Caminho para o arquivo de keystore SSL para configuração HTTPS. Consulte as instruções de configuração do certificado SSL abaixo. |
+  | `SERVER_SSL_KEY_STORE_PASSWORD` | Configuração SSL | Senha para o arquivo do keystore SSL. Esta é a senha que você define ao criar o keystore PKCS12. |
+  | `SERVER_SSL_KEY_STORE_TYPE` | Configuração SSL | Tipo do keystore SSL (por exemplo, PKCS12, JKS). Use PKCS12 ao seguir as instruções de configuração SSL abaixo. |
+  | `SERVER_SSL_KEY_ALIAS` | Configuração SSL | Alias para a chave SSL dentro do keystore. Este é o nome que você especifica ao criar o keystore. |
+  | `SERVER_PORT` | Configuração SSL | Porta do servidor para comunicação HTTPS. Use 8443 para HTTPS. |
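+
+  Um esboço ilustrativo (assumindo um keystore PKCS12 criado conforme as instruções de certificado SSL abaixo e montado em `/app/keystore.p12`, como no exemplo de Docker Compose mais adiante) de como essas variáveis opcionais podem ser definidas para habilitar HTTPS na porta 8443:
+
+  ```bash
+  # Habilita HTTPS no serviço de coletor usando um keystore PKCS12
+  export SERVER_SSL_KEY_STORE="/app/keystore.p12"
+  export SERVER_SSL_KEY_STORE_PASSWORD="senha-do-keystore"
+  export SERVER_SSL_KEY_STORE_TYPE="PKCS12"
+  export SERVER_SSL_KEY_ALIAS="mycert"
+  export SERVER_PORT="8443"
+  ```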
+ + ### Instruções de configuração do certificado SSL + + Para obter um certificado SSL de uma Autoridade de Certificação (CA) confiável para configuração HTTPS, siga estas etapas: + + 1. **Gerar uma chave privada e uma Solicitação de Assinatura de Certificado (CSR)**: + + ```bash + openssl req -new -newkey rsa:2048 -nodes -keyout mycert.key -out mycert.csr + ``` + + 2. **Enviar o CSR para a CA escolhida**: Envie o arquivo `mycert.csr` para a Autoridade de Certificação (CA) escolhida (por exemplo, DigiCert, Let's Encrypt, GoDaddy). + + 3. **Validação completa do domínio**: Conclua todas as etapas de validação de domínio necessárias, conforme instruído pela AC. + + 4. **Baixar o certificado**: Baixe os arquivos de certificado emitidos da CA (comumente um arquivo `.crt` ou `.pem`). + + 5. **Crie um keystore PKCS12**: Combine o certificado e a chave privada em um keystore PKCS12: + + ```bash + openssl pkcs12 -export -in mycert.crt -inkey mycert.key -out keystore.p12 -name mycert + ``` + + 6. **Use o keystore**: Use o arquivo `keystore.p12` gerado como o valor para `SERVER_SSL_KEY_STORE` na sua configuração do Docker. + + + + ## Implantar o serviço de coletor + + O serviço de coletor é fornecido como uma imagem Docker. A implantação pode ser feita de duas maneiras: + + ### Opção A: Usando o Docker Compose (recomendado) + + Crie um arquivo Docker Compose que automatize o download e a implantação do serviço. + + 1. Crie um arquivo `docker-compose.yml` com o seguinte conteúdo: + + ```yaml + version: '3.9' + + services: + nr-ghe-collector: + image: newrelic/nr-ghe-collector:tag # use latest tag available in dockerhub starting with v* + container_name: nr-ghe-collector + restart: unless-stopped + ports: + - "8080:8080" # HTTP port, make 8443 in case of HTTPS + environment: + # Required environment variables + - NR_API_KEY=${NR_API_KEY:-DEFAULT_VALUE} + - NR_LICENSE_KEY=${NR_LICENSE_KEY:-DEFAULT_VALUE} + - GHE_BASE_URL=${GHE_BASE_URL:-DEFAULT_VALUE} + - GITHUB_APP_ID=${GITHUB_APP_ID:-DEFAULT_VALUE} + - GITHUB_APP_PRIVATE_KEY=${GITHUB_APP_PRIVATE_KEY:-DEFAULT_VALUE} + - GITHUB_APP_WEBHOOK_SECRET=${GITHUB_APP_WEBHOOK_SECRET:-DEFAULT_VALUE} + + # Optional SSL environment variables (uncomment and configure if using HTTPS) + # - SERVER_SSL_KEY_STORE=${SERVER_SSL_KEY_STORE} + # - SERVER_SSL_KEY_STORE_PASSWORD=${SERVER_SSL_KEY_STORE_PASSWORD} + # - SERVER_SSL_KEY_STORE_TYPE=${SERVER_SSL_KEY_STORE_TYPE} + # - SERVER_SSL_KEY_ALIAS=${SERVER_SSL_KEY_ALIAS} + # - SERVER_PORT=8443 + #volumes: # Uncomment the line below if using SSL keystore + # - ./keystore.p12:/app/keystore.p12 # path to your keystore file + network_mode: bridge + + networks: + nr-network: + driver: bridge + ``` + + 2. Defina suas variáveis de ambiente substituindo os espaços reservados `DEFAULT_VALUE` no arquivo Docker Compose pelos seus valores reais ou crie variáveis de ambiente no seu sistema antes de executar o comando. + + + Nunca confirme arquivos de ambiente contendo segredos no controle de versão. Use práticas seguras de gerenciamento de segredos em produção. + + + 3. Execute o seguinte comando para iniciar o serviço: + + ```bash + docker-compose up -d + ``` + + ### Opção B: Execução direta da imagem Docker + + Você pode baixar a imagem Docker diretamente do nosso [registro do Docker Hub](https://hub.docker.com/r/newrelic/nr-ghe-collector) e executá-la usando o pipeline CI/CD ou método de implantação preferido da sua organização. 
Observe que o cliente precisa passar todas as variáveis de ambiente listadas acima ao iniciar o serviço de coletor. + + **✓ Indicadores de sucesso:** + + * O serviço do coletor está em execução e acessível na porta configurada + * Os logs do contêiner Docker mostram uma inicialização bem-sucedida sem erros + * O serviço responde a verificações de integridade (se configurado) + + + + ## Instale o aplicativo GitHub nas organizações + + Após a execução do serviço de coletor, você precisa instalar o aplicativo GitHub nas organizações específicas que deseja integrar: + + 1. Navegue até sua instância do GitHub Enterprise. + 2. Vá para **Settings** → **Developer Settings** → **GitHub Apps**. + 3. Encontre o aplicativo GitHub que você criou na etapa 1 e clique nele. + 4. Na barra lateral esquerda, clique em **Install App**. + 5. Selecione as organizações onde deseja instalar o aplicativo. + 6. Escolha se deseja instalar em todos os repositórios ou selecionar repositórios específicos. + 7. Clique em **Install** para concluir a instalação. + + **✓ Indicadores de sucesso:** + + * As entregas de webhook aparecem nas configurações do GitHub App + * Nenhum erro de autenticação nos logs do serviço de coletor + + + + ## Conclua a configuração da integração na interface do usuário do New Relic + + Depois que o serviço de coleta estiver em execução e o GitHub App estiver instalado em sua(s) organização(ões) GHE, conclua a configuração da integração conforme instruído na interface do usuário do New Relic: + + 1. As organizações GHE correspondentes aparecerão na interface do usuário do New Relic. + + 2. Para iniciar a sincronização inicial de dados, clique em **First time sync**. + + 3. *(Opcional)* Clique em **On-demand sync** para sincronizar manualmente os dados. + + + Você pode sincronizar manualmente os dados uma vez a cada 4 horas. O botão de **On-demand sync** permanece desabilitado se a sincronização tiver ocorrido nas últimas 4 horas. + + + 4. Após visualizar a mensagem Sync started (Sincronização iniciada), clique em **Continue** (Continuar). A tela **Integração do GitHub Enterprise** exibe a contagem de equipes e repositórios, atualizando a cada 5 segundos. Aguarde de 15 a 30 minutos para a importação completa de todos os dados (o tempo depende da contagem de repositórios). + + GitHub Enterprise Integration dashboard showing integration progress + + ### Visualizando seus dados + + Na tela **GitHub Enterprise Integration**: + + * Para visualizar as informações das equipes importadas em [Teams](/docs/service-architecture-intelligence/teams/teams), clique em **Go to Teams**. + * Para visualizar as informações dos repositórios importados em [Catalogs](/docs/service-architecture-intelligence/catalogs/catalogs), clique em **Go to Repositories**. + + + + ## Configure as atribuições da equipe (opcional) + + Você pode autoatribuir repositórios do GitHub às suas equipes adicionando `teamOwningRepo` como uma propriedade personalizada no GitHub Enterprise. + + 1. Crie a propriedade personalizada no nível da organização e atribua um valor para ela no nível do repositório. Além disso, você pode configurar uma propriedade personalizada para vários repositórios no nível da organização simultaneamente. + 2. Em seguida, no New Relic Teams, ative o recurso [Automated Ownership](/docs/service-architecture-intelligence/teams/manage-teams/#assign-ownership), certificando-se de usar `team` como a chave de tag. + + Depois de configurado, o New Relic corresponde automaticamente cada repositório com sua equipe correta. 
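+
+  A título de exemplo, segue um esboço ilustrativo de como a propriedade personalizada poderia ser criada e atribuída via GitHub CLI, assumindo que os endpoints de propriedades personalizadas da API REST do GitHub estejam disponíveis na sua versão do GitHub Enterprise; a organização, o repositório e a equipe abaixo são nomes hipotéticos:
+
+  ```bash
+  # Em um GitHub Enterprise Server, aponte o gh para a sua instância (nome de host hipotético)
+  export GH_HOST="ghe.example.com"
+
+  # Cria a propriedade personalizada "teamOwningRepo" no nível da organização
+  gh api --method PUT /orgs/minha-org/properties/schema/teamOwningRepo -f value_type='string'
+
+  # Atribui o valor da propriedade a um repositório específico
+  echo '{
+    "repository_names": ["meu-repositorio"],
+    "properties": [
+      { "property_name": "teamOwningRepo", "value": "minha-equipe" }
+    ]
+  }' | gh api --method PATCH /orgs/minha-org/properties/values --input -
+  ```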
+ + Para obter mais informações sobre como criar propriedades personalizadas, consulte a [documentação do GitHub](https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization). + + + +## Resolução de problemas + +### Problemas e soluções comuns + +**Falhas na entrega do webhook:** + +* Verifique se o serviço de coleta está em execução e acessível no GitHub Enterprise +* Verifique as configurações do firewall e a conectividade de rede + +**Erros de autenticação:** + +* Verifique se o ID do GitHub App e a chave privada estão configurados corretamente +* Certifique-se de que a chave privada esteja devidamente convertida para o formato DER e codificada em Base64 +* Verifique se o segredo do webhook corresponde entre o aplicativo GitHub e a configuração do coletor + +**Falhas de sincronização:** + +* Verifique se o aplicativo GitHub tem as permissões necessárias +* Verifique se o aplicativo está instalado nas organizações corretas +* Revise os logs do serviço de coleta para mensagens de erro específicas + +**Problemas de conectividade de rede:** + +* Certifique-se de que o serviço de coletor possa alcançar sua instância do GitHub Enterprise +* Verifique se os certificados SSL estão configurados corretamente, se estiver usando HTTPS +* Verifique a resolução DNS para o seu domínio GitHub Enterprise + +## Desinstalação + +Para desinstalar a integração do GitHub Enterprise: + +1. Navegue até a interface do usuário do GitHub Enterprise. +2. Vá para as configurações da organização onde o aplicativo está instalado. +3. Desinstale o aplicativo GitHub diretamente da interface do GitHub Enterprise. Esta ação acionará o processo de back-end para interromper a coleta de dados. +4. Pare e remova o serviço de coleta do seu ambiente Docker. \ No newline at end of file