Commit 6bfdb7d: doc to do

docs/config/ctl.rst (+232 lines)
================
Introduction
================

UMF's CTL is a mechanism for advanced configuration and control of UMF pools
and providers. It allows programmatic access to provider- or pool-specific
configuration options, statistics and auxiliary APIs. CTL entries can also be
set through environment variables or a configuration file, allowing adjustment
of UMF behavior without modifying the program.

Main concepts
=============

The core concept is a *path*. A path is a string of nodes separated by periods.
You can imagine nodes as directories where the last element is a file that can
be read, written or executed (similar to ``sysfs`` but with periods instead of
slashes). For example, the path ``umf.logger.level`` controls the log level.
You can access it with::

    int level;
    umf_result_t ret = umfCtlGet("umf.logger.level", &level, sizeof(level));

To change the level programmatically use::

    int level = LOG_WARNING;
    umf_result_t ret = umfCtlSet("umf.logger.level", &level, sizeof(level));

Accessing pool or provider paths is slightly more involved. For example::

    size_t alloc_count;
    umf_memory_pool_handle_t hPool = createPool();
    umf_result_t ret = umfCtlGet("umf.pool.by_handle.{}.stats.alloc_count",
                                 &alloc_count, sizeof(alloc_count), hPool);

The ``umf.pool.by_handle`` prefix selects a pool addressed by its handle.
Every ``{}`` in the path is replaced with an extra argument passed to the CTL
function. Alternative addressing methods are described below.

Pool / Provider addressing
==========================

Two addressing schemes are provided: ``by_handle`` and ``by_name``. Each pool
and provider has a unique handle and an optional user-defined name that can be
queried with ``umfMemoryProviderGetName()`` or ``umfMemoryPoolGetName()``.
When using ``by_name`` the name appears in the path, e.g.::

    umfCtlGet("umf.pool.by_name.myPool.stats.alloc_count",
              &alloc_count, sizeof(alloc_count));

If multiple pools share a name, read operations must disambiguate the target by
appending an index after the name::

    umfCtlGet("umf.pool.by_name.myPool.0.stats.alloc_count",
              &alloc_count, sizeof(alloc_count));

The number of pools with a given name can be obtained with the ``count`` node.

Wildcards
=========

A ``{}`` in the path acts as a wildcard and is replaced with successive
arguments of ``umfCtlGet``, ``umfCtlSet`` or ``umfCtlExec``. Wildcards can
replace any node, not only handles. For example::

    size_t pool_count;
    size_t alloc_count;
    const char *name = "myPool";
    umfCtlGet("umf.pool.by_name.{}.count", &pool_count, sizeof(pool_count),
              name);
    for (size_t i = 0; i < pool_count; i++) {
        umfCtlGet("umf.pool.by_name.{}.{}.stats.alloc_count", &alloc_count,
                  sizeof(alloc_count), name, i);
    }

Ensure that the types of wildcard arguments match the expected node types.

Defaults
========

``umf.provider.default`` and ``umf.pool.default`` store default values applied
to providers or pools created after the defaults are set. For example::

    const char *name = "custom";
    umfCtlSet("umf.pool.default.disjoint.name", (void *)name, strlen(name) + 1);

Every subsequently created disjoint pool will use ``custom`` as its name unless
overridden by explicit parameters. Defaults may be supplied programmatically or
via configuration and are queued internally until a matching provider or pool is
created. Once the object is initialized the defaults are applied before it is
used, and the application may still override any of them with later
programmatic CTL calls.

Environment variables
=====================

CTL entries may also be specified in the ``UMF_CONF`` environment variable or
in a configuration file whose path is given in ``UMF_CONF_FILE``.
Multiple entries are separated with semicolons, e.g.::

    UMF_CONF="umf.logger.output=stdout;umf.logger.level=0"

CTL options available through environment variables are limited: pools can only
be addressed through ``default``, which means you can only influence settings
that are applied during pool creation.
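
For instance, the ``default`` addressing from the previous section can be
combined with this syntax to preset a pool name from the environment (a sketch;
``./my_app`` is a hypothetical program, and the exact set of writable
``default`` keys depends on the pool implementation):

.. code-block:: bash

   UMF_CONF="umf.pool.default.disjoint.name=custom;umf.logger.level=0" ./my_app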

CTL nodes
=========

The following table lists all currently supported CTL paths. Curly braces
denote wildcards: ``{provider}`` expects a ``umf_memory_provider_handle_t``,
``{pool}`` expects a ``umf_memory_pool_handle_t`` and ``{id}`` is a ``size_t``
bucket index. Paths are shown with ``by_handle`` addressing; ``by_name`` may be
used as an alternative. Where a path is specific to a particular provider or
pool type, this is noted in the description.

.. list-table:: Supported CTL paths
   :header-rows: 1
   :widths: 40 60

   * - Path
     - Description
   * - ``umf.logger.timestamp``
     - Enable or disable timestamps in log messages (int, read-write)
   * - ``umf.logger.pid``
     - Include process id in log messages (int, read-write)
   * - ``umf.logger.level``
     - Minimum log level written to the output (int, read-write)
   * - ``umf.logger.flush_level``
     - Level at which the log is flushed (int, read-write)
   * - ``umf.logger.output``
     - Output destination; ``stdout``, ``stderr`` or a file path (string, read-write)
   * - ``umf.provider.by_handle.{provider}.stats.allocated_memory``
     - Bytes currently allocated by the provider (all providers, size_t, read-only)
   * - ``umf.provider.by_handle.{provider}.stats.peak_memory``
     - Peak allocation size since last reset (all providers, size_t, read-only)
   * - ``umf.provider.by_handle.{provider}.stats.peak_memory.reset``
     - Reset the peak allocation counter (all providers, exec)
   * - ``umf.provider.by_handle.{provider}.stats.reset``
     - Reset all provider statistics (all providers, exec)
   * - ``umf.provider.by_handle.{provider}.params.ipc_enabled``
     - Non-zero when IPC is enabled (OS memory provider, int, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.used_memory``
     - Memory currently used by the pool (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.reserved_memory``
     - Total memory reserved by the pool (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.alloc_num``
     - Number of allocations performed (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.alloc_pool_num``
     - Number of allocations served from the pool (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.free_num``
     - Number of frees performed (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.curr_slabs_in_use``
     - Current slabs in use (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.curr_slabs_in_pool``
     - Current slabs retained in the pool (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.max_slabs_in_use``
     - Peak slabs in use (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.stats.max_slabs_in_pool``
     - Peak slabs retained in the pool (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.count``
     - Number of bucket sizes (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.size``
     - Allocation size served by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.alloc_num``
     - Allocations performed by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.alloc_pool_num``
     - Allocations serviced from pool by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.free_num``
     - Frees performed by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.curr_slabs_in_use``
     - Slabs in use by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.curr_slabs_in_pool``
     - Slabs retained in the pool by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.max_slabs_in_use``
     - Peak slabs in use by bucket ``{id}`` (disjoint pool, size_t, read-only)
   * - ``umf.pool.by_handle.{pool}.buckets.{id}.stats.max_slabs_in_pool``
     - Peak slabs retained in the pool by bucket ``{id}`` (disjoint pool, size_t, read-only)
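
For example, a provider's peak counter can be read and then reset through the
``exec`` node (a sketch, assuming ``umfCtlExec`` follows the same calling
convention as ``umfCtlGet``, with ``NULL``/``0`` when no payload is needed, and
that ``hProvider`` is an existing provider handle)::

    size_t peak;
    umfCtlGet("umf.provider.by_handle.{}.stats.peak_memory",
              &peak, sizeof(peak), hProvider);
    umfCtlExec("umf.provider.by_handle.{}.stats.peak_memory.reset",
               NULL, 0, hProvider);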

Disjoint pool parameters may be written via CTL after pool creation but before
the ``post_initialize`` step. When defaults or explicit writes override these
parameters, the new values are reported with an informational log during
``post_initialize``. Attempts to modify parameters after post-initialization are
rejected.

================================================
Adding CTL support to custom providers and pools
================================================

The :file:`examples/ctl/ctl_example.c` source demonstrates how a minimal
provider can expose configuration entries, statistics and runnables through the
CTL API. To add similar support to your own provider or pool:

1. **Store state for CTL** – keep the data you want to expose inside the
   provider or pool instance.
2. **Implement an ext_ctl callback** – parse incoming CTL paths and handle
   ``CTL_QUERY_READ``, ``CTL_QUERY_WRITE`` and ``CTL_QUERY_RUNNABLE`` requests
   (see the sketch after this list). The callback receives a
   ``umf_ctl_query_source_t`` indicating whether the query came from the
   application or a configuration source. Programmatic calls pass typed binary
   data, while configuration sources deliver strings that must be parsed.
   Wildcards (``{}``) may appear in paths and are supplied as additional
   arguments.
3. **Register the callback** – assign the function to the ``ext_ctl`` field of
   :type:`umf_memory_provider_ops_t` or :type:`umf_memory_pool_ops_t`.
4. **Interact using CTL** – use :func:`umfCtlSet`, :func:`umfCtlGet` and
   :func:`umfCtlExec` from your application or configuration to reach the
   new entries.
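
The following is a schematic sketch of such a callback. The exact ``ext_ctl``
signature is defined in the UMF headers; the parameter list, the state struct
and the enumerator ``CTL_QUERY_SOURCE_PROGRAMMATIC`` shown here are
illustrative assumptions, so see :file:`examples/ctl/ctl_example.c` for the
authoritative version::

    /* Illustrative only -- consult the UMF headers for the real signature. */
    static umf_result_t my_ctl(void *provider, umf_ctl_query_source_t source,
                               const char *path, void *arg, size_t size,
                               umf_ctl_query_type_t query) {
        my_provider_t *p = (my_provider_t *)provider; /* hypothetical state */
        if (strcmp(path, "stats.count") == 0 && query == CTL_QUERY_READ) {
            if (size < sizeof(size_t))
                return UMF_RESULT_ERROR_INVALID_ARGUMENT;
            *(size_t *)arg = p->count; /* typed binary data for reads */
            return UMF_RESULT_SUCCESS;
        }
        if (strcmp(path, "params.m") == 0 && query == CTL_QUERY_WRITE) {
            /* Programmatic calls pass an int; configuration sources pass a
             * string that must be parsed (enumerator name assumed). */
            if (source == CTL_QUERY_SOURCE_PROGRAMMATIC)
                p->m = *(int *)arg;
            else
                p->m = atoi((const char *)arg);
            return UMF_RESULT_SUCCESS;
        }
        return UMF_RESULT_ERROR_INVALID_ARGUMENT;
    }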

During initialization UMF will execute ``post_initialize`` on the callback after
applying any queued defaults, allowing the provider or pool to finalize its
state before it is used by the application. The example converts wildcarded
paths into ``printf``-style format strings with ``%s`` and uses ``vsnprintf`` to
resolve the extra arguments. It also shows a helper that accepts integers from
either source, printing the final values from ``post_initialize``.
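
The wildcard-to-format conversion mentioned above can be done with a small
helper along these lines (a sketch; the function name and buffer handling are
illustrative, not the example's actual code)::

    /* Replace each "{}" in a CTL path with "%s" so that the extra arguments
     * can later be resolved with vsnprintf. */
    static void wildcards_to_format(const char *path, char *fmt,
                                    size_t fmt_size) {
        size_t out = 0;
        for (const char *p = path; *p != '\0' && out + 2 < fmt_size; p++) {
            if (p[0] == '{' && p[1] == '}') {
                fmt[out++] = '%';
                fmt[out++] = 's';
                p++; /* skip the closing brace */
            } else {
                fmt[out++] = *p;
            }
        }
        fmt[out] = '\0';
    }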

Building and running the example:

.. code-block:: bash

   cmake -B build
   cmake --build build
   ./build/examples/umf_example_ctl

An optional modulus can be supplied via the environment:

.. code-block:: bash

   UMF_CONF="umf.provider.default.ctl.m=10" ./build/examples/umf_example_ctl

The example's ``post_initialize`` handler prints the final values loaded from
defaults before the application runs. The same pattern applies to custom memory
pools – implement an ``ext_ctl`` callback in the pool's ops structure and
expose any pool-specific knobs or statistics.
