---
id: confluent-cloud-metrics-source
title: Confluent Cloud Metrics Source
sidebar_label: Confluent Cloud Metrics
tags:
  - cloud-to-cloud
  - confluent-cloud-metrics
description: The Confluent Cloud Metrics source collects metric data from the Confluent Cloud Metrics API and sends it to Sumo Logic.
---

import CodeBlock from '@theme/CodeBlock';
import ExampleJSON from '/files/c2c/confluent-cloud-metrics/example.json';
import MyComponentSource from '!!raw-loader!/files/c2c/confluent-cloud-metrics/example.json';
import TerraformExample from '!!raw-loader!/files/c2c/confluent-cloud-metrics/example.tf';
import ForwardToSiem from '/docs/reuse/forward-to-siem.md';
import useBaseUrl from '@docusaurus/useBaseUrl';

<img src={useBaseUrl('img/send-data/confluent-cloud-metrics.png')} alt="icon" width="160"/>

Confluent is a software company that helps organizations manage, deploy, and scale real-time data infrastructure, enabling businesses to build real-time applications and derive insights from data efficiently.
Confluent Cloud is a scalable, fully managed streaming data service based on Apache Kafka®. It offers a web interface called the Cloud Console for managing resources, settings, and billing, along with a local Command Line Interface (CLI) and REST APIs to create and manage Kafka topics.
This integration collects metric data from the Confluent Cloud Metrics API and sends it to Sumo Logic.

## Data collected

| Polling Interval | Data |
| :-- | :-- |
| 5 minutes | [Export metric values API](https://api.telemetry.confluent.cloud/docs#tag/Version-2/paths/~1v2~1metrics~1%7Bdataset%7D~1export/get) |

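Under the hood, each poll amounts to an authenticated GET against the Confluent export endpoint, passing the resource IDs and (optionally) metric names as query parameters. The following is a minimal sketch of how such a request could be assembled; the function name, cluster ID, and credentials are illustrative only, and the integration handles all of this for you:

```python
import base64
from urllib.parse import urlencode


def build_export_request(api_key, api_secret, kafka_ids, metrics=()):
    """Build the URL and auth header for a Metrics API export call (sketch)."""
    # Repeated query parameters select which resources and metrics to export.
    params = [("resource.kafka.id", k) for k in kafka_ids]
    params += [("metric", m) for m in metrics]
    url = ("https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?"
           + urlencode(params))
    # The API Key ID and Secret are sent as HTTP Basic auth credentials.
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return url, {"Authorization": f"Basic {token}"}


# Hypothetical values for illustration only.
url, headers = build_export_request(
    "U5XXXYZYGAXXXFRZ", "example-secret",
    kafka_ids=["lkc-12345"],
    metrics=["io.confluent.kafka.server/received_bytes"],
)
print(url)
```

The response of the real endpoint is returned in OpenMetrics text format, which the source then converts into Sumo Logic metrics.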
## Setup

### Vendor configuration

The Confluent Cloud Metrics source requires you to provide the **Client ID (API Key ID)** and the **Client Secret (API Secret)** to access the data.
To generate the Client ID and Client Secret, follow the [Cloud API key generation instructions](https://docs.confluent.io/cloud/current/monitoring/metrics-api.html#add-the-metricsviewer-role-to-a-new-service-account) for your Confluent Cloud account.

### Source configuration

When you create a Confluent Cloud Metrics source, you add it to a Hosted Collector. Before creating the source, identify the Hosted Collector you want to use or create a new Hosted Collector. For instructions, see [Configure a Hosted Collector and Source](/docs/send-data/hosted-collectors/configure-hosted-collector).

To configure a Confluent Cloud Metrics source:
1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Collection**. <br/>[**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu, select **Configuration**, and then under **Data Collection** select **Collection**. You can also click the **Go To...** menu at the top of the screen and select **Collection**.
1. On the Collection page, click **Add Source** next to a Hosted Collector.
1. Search for and select **Confluent Metrics**.
1. Enter a **Name** for the source. The description is optional.
1. (Optional) For **Source Category**, enter any string to tag the output collected from the source. Category metadata is stored in a searchable field called `_sourceCategory`.
1. (Optional) **Fields**. Click the **+Add** button to define the fields you want to associate. Each field needs a name (key) and value.
   * A green circle with a check mark is shown when the field exists in the Fields table schema.
   * An orange triangle with an exclamation point is shown when the field doesn't exist in the Fields table schema. In this case, an option to automatically add the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo Logic that does not exist in the Fields schema, it is ignored, known as dropped.
1. **API Key ID**. Enter the Client ID collected from the [vendor configuration](#vendor-configuration). For example, `U5XXXYZYGAXXXFRZ`.
1. **API Secret**. Enter the Client Secret collected from the [vendor configuration](#vendor-configuration). For example, `psYDINXXXG9eYi9hF/X20SZAI4YEn5IZ0cXXXuZ556WIbKYvHPHSCTXXXyF`.
1. **Resource Filters**. Select the checkbox for each resource type you want to collect metrics for, and then enter the IDs of the relevant resources to export metrics.
1. (Optional) **Metric Filter**. Select the checkbox to specify the metrics to export. If no metric is specified, all metrics for the resource are exported.
1. (Optional) **Processing Rules for Logs**. Configure any desired filters, such as allowlist, denylist, hash, or mask, as described in [Create a Processing Rule](/docs/send-data/collection/processing-rules/create-processing-rule).
1. When you are finished configuring the source, click **Save**.

## JSON schema

Sources can be configured using UTF-8 encoded JSON files with the Collector Management API. See [Use JSON to Configure Sources](/docs/send-data/use-json-configure-sources) for details.

| Parameter | Type | Value | Required | Description |
|:--|:--|:--|:--|:--|
| schemaRef | JSON Object | `{"type": "Confluent Cloud Metrics"}` | Yes | Define the specific schema type. |
| sourceType | String | `"Universal"` | Yes | Type of source. |
| config | JSON Object | [Configuration object](#configuration-object) | Yes | Source type specific values. |

### Configuration Object

| Parameter | Type | Required | Default | Description | Example |
|:--|:--|:--|:--|:--|:--|
| name | String | Yes | `null` | Type a desired name of the source. The name must be unique per Collector. This value is assigned to the [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) field `_source`. | `"mySource"` |
| description | String | No | `null` | Type a description of the source. | `"Testing source"` |
| category | String | No | `null` | Type a category of the source. This value is assigned to the [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) field `_sourceCategory`. See [best practices](/docs/send-data/best-practices) for details. | `"mySource/test"` |
| fields | JSON Object | No | `null` | JSON map of key-value fields (metadata) to apply to the collector or source. Use the boolean field `_siemForward` to enable forwarding to SIEM. | `{"_siemForward": false, "fieldA": "valueA"}` |
| clientId | String | Yes | `null` | API Key ID generated from the Cloud API key in your Confluent Cloud account. | `U5XXXYZYGAXXXFRZ` |
| clientSecret | String | Yes | `null` | API Key Secret generated from the Cloud API key in your Confluent Cloud account. | `psYDINXXXG9eYi9hF/X20SZAI4YEn5IZ0cXXXuZ556WIbKYvHPHSCTXXXyF` |
| resourceKafkaId | Boolean | No | `false` | Set to `true` to collect metrics for Kafka cluster IDs. | |
| resourceConnectorId | Boolean | No | `false` | Set to `true` to collect metrics for Connector IDs. | |
| resourceKSQLId | Boolean | No | `false` | Set to `true` to collect metrics for ksqlDB IDs. | |
| resourceSchemaRegistryId | Boolean | No | `false` | Set to `true` to collect metrics for Schema Registry IDs. | |
| resourceComputePoolId | Boolean | No | `false` | Set to `true` to collect metrics for Compute Pool IDs. | |
| kafkaId | []String | No | `null` | The IDs of the Kafka clusters to export metrics for. | |
| connectorId | []String | No | `null` | The IDs of the Connectors to export metrics for. | |
| ksqlId | []String | No | `null` | The IDs of the ksqlDB applications to export metrics for. | |
| schemaRegistryId | []String | No | `null` | The IDs of the Schema Registries to export metrics for. | |
| computepoolId | []String | No | `null` | The IDs of the Flink Compute Pools to export metrics for. | |
| metric | []String | No | `null` | The metrics to export. If this parameter is not specified, all metrics for the resource are exported. | |
| ignoreFailedMetrics | Boolean | No | `false` | Ignore failed metrics and export only successful metrics if the allowed failure threshold is not breached. If set to `true`, a StateSet metric (`export_status`) is included in the response to report which metrics succeeded and which failed. | |
| pollingIntervalMin | Integer | Yes | `5` | Time interval (in minutes) after which the source checks for new data from the source API. | |

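To show how a resource filter flag pairs with its ID list, here is a minimal sketch of a source definition. The cluster ID, metric name, and credential values are placeholders, and only a subset of the optional parameters is shown; see the full example below for a complete definition.

```json
{
  "schemaRef": {
    "type": "Confluent Cloud Metrics"
  },
  "sourceType": "Universal",
  "config": {
    "name": "confluent-cloud-metrics",
    "clientId": "U5XXXYZYGAXXXFRZ",
    "clientSecret": "********",
    "resourceKafkaId": true,
    "kafkaId": ["lkc-12345"],
    "metric": ["io.confluent.kafka.server/received_bytes"],
    "pollingIntervalMin": 5
  }
}
```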
### JSON example

<CodeBlock language="json">{MyComponentSource}</CodeBlock>

<a href="/files/c2c/confluent-cloud-metrics/example.json" target="_blank">Download example</a>

### Terraform example

<CodeBlock language="hcl">{TerraformExample}</CodeBlock>

<a href="/files/c2c/confluent-cloud-metrics/example.tf" target="_blank">Download example</a>

## FAQ

:::info
Click [here](/docs/c2c/info) for more information about Cloud-to-Cloud sources.
:::