
Commit 15de31f

Merge pull request #62711 from sradco/update_metrics_names
Updated kubevirt metrics names
2 parents: 6d2ec48 + 528e8d0

2 files changed: 15 additions, 16 deletions


modules/virt-live-migration-metrics.adoc

Lines changed: 8 additions & 9 deletions
@@ -8,19 +8,18 @@
 The following metrics can be queried to show live migration status:

-`kubevirt_migrate_vmi_data_processed_bytes`:: The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
+`kubevirt_vmi_migration_data_processed_bytes`:: The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.

-`kubevirt_migrate_vmi_data_remaining_bytes`:: The amount of guest operating system data that remains to be migrated. Type: Gauge.
+`kubevirt_vmi_migration_data_remaining_bytes`:: The amount of guest operating system data that remains to be migrated. Type: Gauge.

-`kubevirt_migrate_vmi_dirty_memory_rate_bytes`:: The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
+`kubevirt_vmi_migration_memory_transfer_rate_bytes`:: The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.

-`kubevirt_migrate_vmi_pending_count`:: The number of pending migrations. Type: Gauge.
+`kubevirt_vmi_migrations_in_pending_phase`:: The number of pending migrations. Type: Gauge.

-`kubevirt_migrate_vmi_scheduling_count`:: The number of scheduling migrations. Type: Gauge.
+`kubevirt_vmi_migrations_in_scheduling_phase`:: The number of scheduling migrations. Type: Gauge.

-`kubevirt_migrate_vmi_running_count`:: The number of running migrations. Type: Gauge.
+`kubevirt_vmi_migrations_in_running_phase`:: The number of running migrations. Type: Gauge.

-`kubevirt_migrate_vmi_succeeded`:: The number of successfully completed migrations. Type: Gauge.
-
-`kubevirt_migrate_vmi_failed`:: The number of failed migrations. Type: Gauge.
+`kubevirt_vmi_migration_succeeded`:: The number of successfully completed migrations. Type: Gauge.

+`kubevirt_vmi_migration_failed`:: The number of failed migrations. Type: Gauge.
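
As an illustrative follow-up (not part of this commit), the renamed data-transfer gauges could be combined into a single migration-progress expression; the query below is a sketch that assumes both gauges are populated for the migrating VM:

[source,promql]
----
# Hypothetical example: fraction of guest data already transferred,
# using the renamed metrics introduced in this commit.
kubevirt_vmi_migration_data_processed_bytes
  / (kubevirt_vmi_migration_data_processed_bytes
     + kubevirt_vmi_migration_data_remaining_bytes)
----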

modules/virt-querying-metrics.adoc

Lines changed: 7 additions & 7 deletions
@@ -17,7 +17,7 @@ The following examples use `topk` queries that specify a time period. If virtual
 The following query can identify virtual machines that are waiting for Input/Output (I/O):

-`kubevirt_vmi_vcpu_wait_seconds`::
+`kubevirt_vmi_vcpu_wait_seconds_total`::
 Returns the wait time (in seconds) for a virtual machine's vCPU. Type: Counter.

 A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
@@ -30,7 +30,7 @@ To query the vCPU metric, the `schedstats=enable` kernel argument must first be
 .Example vCPU wait time query
 [source,promql]
 ----
-topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 <1>
+topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0 <1>
 ----
 <1> This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
@@ -76,7 +76,7 @@ topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_t
 [id="virt-storage-snapshot-data_{context}"]
 === Storage snapshot data

-`kubevirt_vmsnapshot_disks_restored_from_source_total`::
+`kubevirt_vmsnapshot_disks_restored_from_source`::
 Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.

 `kubevirt_vmsnapshot_disks_restored_from_source_bytes`::
@@ -85,7 +85,7 @@ Returns the amount of space in bytes restored from the source virtual machine. T
 .Examples of storage snapshot data queries
 [source,promql]
 ----
-kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"} <1>
+kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"} <1>
 ----
 <1> This query returns the total number of virtual machine disks restored from the source virtual machine.

@@ -118,16 +118,16 @@ topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])
 The following queries can identify which swap-enabled guests are performing the most memory swapping:

-`kubevirt_vmi_memory_swap_in_traffic_bytes_total`::
+`kubevirt_vmi_memory_swap_in_traffic_bytes`::
 Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.

-`kubevirt_vmi_memory_swap_out_traffic_bytes_total`::
+`kubevirt_vmi_memory_swap_out_traffic_bytes`::
 Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.

 .Example memory swapping query
 [source,promql]
 ----
-topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 <1>
+topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 <1>
 ----
 <1> This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
