Merged
35 changes: 21 additions & 14 deletions web/locales/en/plugin__netobserv-plugin.json
@@ -168,21 +168,25 @@
"Namespaces": "Namespaces",
"Pods": "Pods",
"Recommendations": "Recommendations",
"The example outlined in the table demonstrates a scenario that is tailored to your workload. Consider this example only as a baseline from which adjustments can be made to accommodate your needs.": "The example outlined in the table demonstrates a scenario that is tailored to your workload. Consider this example only as a baseline from which adjustments can be made to accommodate your needs.",
"vCPU": "vCPU",
"Memory": "Memory",
"LokiStack size": "LokiStack size",
"Kafka": "Kafka",
"Estimation": "Estimation",
"Sampling": "Sampling",
"(current)": "(current)",
"Performance tuning and estimation": "Performance tuning and estimation",
"The sampling interval is one of the main settings used to balance performance and accuracy. The lower the interval, the higher the accuracy.": "The sampling interval is one of the main settings used to balance performance and accuracy. The lower the interval, the higher the accuracy.",
"Use the slider below to configure the desired sampling interval.": "Use the slider below to configure the desired sampling interval.",
"Sampling interval": "Sampling interval",
"The estimations are based on the number of nodes in the cluster and the sampling rate. They do not take into account the number of namespaces or pods, as their impact is comparatively lower than that of nodes.\nThey are calculated using a linear regression model based on data collected from various OpenShift clusters. Actual resource consumption may vary depending on your specific workload and cluster configuration.": "The estimations are based on the number of nodes in the cluster and the sampling interval. They do not take into account the number of namespaces or pods, as their impact is comparatively lower than that of nodes.\nThey are calculated using a linear regression model based on data collected from various OpenShift clusters. Actual resource consumption may vary depending on your specific workload and cluster configuration.",
"There is some issue in this form view. Please select \"YAML view\" for full control.": "There is an issue in this form view. Please select \"YAML view\" for full control.",
"Note: Some fields may not be represented in this form view. Please select \"YAML view\" for full control.": "Note: Some fields may not be represented in this form view. Please select \"YAML view\" for full control.",
"(see more...)": "(see more...)",
"Remove {{singularLabel}}": "Remove {{singularLabel}}",
"Add {{singularLabel}}": "Add {{singularLabel}}",
"Error": "Error",
"Fix the following errors:": "Fix the following errors:",
"Enabled": "Enabled",
"Disabled": "Disabled",
"True": "True",
"False": "False",
"Select {{title}}": "Select {{title}}",
"Configure via:": "Configure via:",
"Form view": "Form view",
@@ -201,20 +201,22 @@
"Create FlowCollector": "Create FlowCollector",
"Network Observability FlowCollector setup": "Network Observability FlowCollector setup",
"Overview": "Overview",
"Network Observability Operator deploys a monitoring pipeline that consists in:\n - an eBPF agent, that generates network flows from captured packets\n - flowlogs-pipeline, a component that collects, enriches and exports these flows\n - a Console plugin for flows visualization with powerful filtering options, a topology representation and more\n\nFlow data is then available in multiple ways, each optional:\n - As Prometheus metrics\n - As raw flow logs stored in Grafana Loki\n - As raw flow logs exported to a collector\n\nThe FlowCollector resource is used to configure the operator and its managed components.\nThis setup will guide you on the common aspects of the FlowCollector configuration.": "The Network Observability Operator deploys a monitoring pipeline that consists of:\n - an eBPF agent that generates network flows from captured packets\n - flowlogs-pipeline, a component that collects, enriches and exports these flows\n - a Console plugin for flow visualization with powerful filtering options, a topology representation and more\n\nFlow data is then available in multiple ways, each optional:\n - As Prometheus metrics\n - As raw flow logs stored in Grafana Loki\n - As raw flow logs exported to a collector\n\nThe FlowCollector resource is used to configure the operator and its managed components.\nThis setup will guide you through the common aspects of the FlowCollector configuration.",
"The FlowCollector resource is used to configure the Network Observability operator and its managed components. When it is created, network flows start being collected.": "The FlowCollector resource is used to configure the Network Observability operator and its managed components. When it is created, network flows start being collected.",
"This wizard is a helper to create a first FlowCollector resource. It does not cover all the available configuration options, but only the most common ones.\nFor advanced configuration, please use YAML or the": "This wizard helps you create a first FlowCollector resource. It does not cover all the available configuration options, only the most common ones.\nFor advanced configuration, please use YAML or the",
"FlowCollector form": "FlowCollector form",
", which includes more options such as:\n- Filtering options\n- Configuring custom exporters\n- Custom labels based on IP\n- Pod identification for secondary networks\n- Performance fine-tuning\nYou can always edit a FlowCollector later when you start with the simplified configuration.": ", which includes more options such as:\n- Filtering options\n- Configuring custom exporters\n- Custom labels based on IP\n- Pod identification for secondary networks\n- Performance fine-tuning\nYou can always edit the FlowCollector later if you start with the simplified configuration.",
"Operator configuration": "Operator configuration",
"Capture": "Capture",
"Pipeline": "Pipeline",
"Storage": "Storage",
"Integration": "Integration",
"Processing": "Processing",
"Consumption": "Consumption",
"Review": "Review",
"Submit": "Submit",
"Network Observability FlowMetric setup": "Network Observability FlowMetric setup",
"You can create custom metrics out of the flowlogs data using the FlowMetric API. In every flowlogs data that is collected, there are a number of fields labeled per log, such as source name and destination name. These fields can be leveraged as Prometheus labels to enable the customization of cluster information on your dashboard.\nThis setup will guide you on the common aspects of the FlowMetric configuration.": "You can create custom metrics from the flow logs data using the FlowMetric API. Each collected flow log contains a number of fields, such as source name and destination name. These fields can be leveraged as Prometheus labels to enable the customization of cluster information on your dashboard.\nThis setup will guide you through the common aspects of the FlowMetric configuration.",
"General configuration": "General configuration",
"You can create custom metrics out of the network flows using the FlowMetric API. A FlowCollector resource must be created as well in order to produce the flows. Each flow consists in a set of fields with values, such as source name and destination name. These fields can be leveraged as Prometheus labels to enable customized metrics and dashboards.": "You can create custom metrics from the network flows using the FlowMetric API. A FlowCollector resource must also be created in order to produce the flows. Each flow consists of a set of fields with values, such as source name and destination name. These fields can be leveraged as Prometheus labels to enable customized metrics and dashboards.",
"This simplified setup guides you through the common aspects of the FlowMetric configuration. For advanced configuration, please use YAML or the ": "This simplified setup guides you through the common aspects of the FlowMetric configuration. For advanced configuration, please use YAML or the ",
"FlowMetric form": "FlowMetric form",
"Resource configuration": "Resource configuration",
"Metric": "Metric",
"Data": "Data",
"Charts": "Charts",
"Review": "Review",
"Update {{kind}}": "Update {{kind}}",
"Create {{kind}}": "Create {{kind}}",
"Update by completing the form. Current values are from the existing resource.": "Update by completing the form. Current values are from the existing resource.",
@@ -400,6 +406,7 @@
"from": "from",
"in": "in",
"Configuration": "Configuration",
"Sampling": "Sampling",
"Max chunk age": "Max chunk age",
"Version": "Version",
"Number": "Number",
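Several of the strings above describe the FlowCollector resource and its sampling setting. As a rough illustration only, a minimal FlowCollector might look like the sketch below; the field names follow the NetObserv `flows.netobserv.io/v1beta2` API as I understand it, and the values are placeholders to verify against the installed CRD before use.

```yaml
# Minimal FlowCollector sketch (illustrative values, not a definitive config).
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster          # the operator expects a single instance named "cluster"
spec:
  agent:
    type: eBPF           # the eBPF agent generates flows from captured packets
    ebpf:
      sampling: 50       # sampling interval: keep 1 flow out of every 50
  loki:
    enable: true         # store raw flow logs in Grafana Loki (optional)
  consolePlugin:
    enable: true         # flow visualization in the console plugin
```

As the wizard text notes, this can be created with only common options first and edited later for advanced configuration (filters, exporters, performance fine-tuning).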