# Resource control based on cgroup

## Summary

### General Motivation

Currently, we don't control or limit the resource usage of worker processes, except by running the worker in a container (see the `container` part of the [doc](https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#api-reference)). In most scenarios a container is unnecessary, but resource control is still needed for isolation.

[Control groups](https://man7.org/linux/man-pages/man7/cgroups.7.html), usually referred to as cgroups, are a Linux kernel feature that allows processes to be organized into hierarchical groups whose usage of various types of resources can then be limited and monitored.

So, the goal of this proposal is to achieve resource control for worker processes via cgroups on Linux.

### Should this change be within `ray` or outside?

These changes would be within Ray Core.

## Stewardship

### Required Reviewers
@ericl, @edoakes, @simon-mo, @chenk008, @raulchen

### Shepherd of the Proposal (should be a senior committer)
@ericl

## Design and Architecture

### Cluster level API
We should add some new system configs for resource control (a usage sketch follows the list):

- `worker_resource_control_method`: Set to `"cgroup"` by default.
- `cgroup_manager`: Set to `"cgroupfs"` by default. We should also support `systemd`.
- `cgroup_mount_path`: Set to `/sys/fs/cgroup/` by default.
- `cgroup_unified_hierarchy`: Whether to use cgroup v2. Set to `False` by default, since cgroup v1 is still the more widely deployed version (see the cgroup versions section below).
- `cgroup_use_cpuset`: Whether to use cpuset. Set to `False` by default.

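As a rough illustration only, these configs could be passed at cluster start through Ray's existing `_system_config` option; the key names are the ones proposed above, but the exact plumbing is an assumption, not a settled interface.

```python
import ray

# Hypothetical sketch: pass the proposed cgroup-related system configs when
# starting Ray. Whether they are exposed via `_system_config` is an
# assumption of this example.
ray.init(
    _system_config={
        "worker_resource_control_method": "cgroup",
        "cgroup_manager": "cgroupfs",        # or "systemd"
        "cgroup_mount_path": "/sys/fs/cgroup/",
        "cgroup_unified_hierarchy": False,   # True to use cgroup v2
        "cgroup_use_cpuset": False,
    }
)
```
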
### User level API
Users can turn resource control on or off by setting the relevant fields of the runtime env (at the job level or the task/actor level).

#### Simple API using a flag
```python
runtime_env = {
    "enable_resource_control": True,
}
```

Or, using the `RuntimeEnv` class:
```python
from ray.runtime_env import RuntimeEnv

runtime_env = RuntimeEnv(
    enable_resource_control=True
)
```

#### Full API
```python
runtime_env = {
    "enable_resource_control": True,
    "resource_control_config": {
        "cpu_enabled": True,
        "memory_enabled": True,
        "cpu_strict_usage": False,
        "memory_strict_usage": True,
    }
}
```

Or, using the `RuntimeEnv` class:
```python
from ray.runtime_env import RuntimeEnv, ResourceControlConfig

runtime_env = RuntimeEnv(
    enable_resource_control=True,
    resource_control_config=ResourceControlConfig(
        cpu_enabled=True,
        memory_enabled=True,
        cpu_strict_usage=False,
        memory_strict_usage=True,
    ),
)
```

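Once built, the runtime env is attached in the usual Ray way, at the job level or the task/actor level; the example below only assumes that the proposed `enable_resource_control` field exists.

```python
import ray
from ray.runtime_env import RuntimeEnv

runtime_env = RuntimeEnv(enable_resource_control=True)

# Job level: every worker started for this job is resource-controlled.
ray.init(runtime_env=runtime_env)

# Task/actor level: only this actor's worker processes are resource-controlled.
@ray.remote(runtime_env=runtime_env)
class Trainer:
    def ping(self):
        return "ok"
```
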
### How to manage cgroups

#### cgroup versions
Cgroups have two versions: cgroup v1 and cgroup v2. [Cgroup v2](https://www.kernel.org/doc/Documentation/cgroup-v2.txt) was made official with the release of Linux 4.5, but only a few distributions are known to use cgroup v2 by default (refer to [this page](https://rootlesscontaine.rs/getting-started/common/cgroup2/)): Fedora (since 31), Arch Linux (since April 2021), openSUSE Tumbleweed (since c. 2021), Debian GNU/Linux (since 11), and Ubuntu (since 21.10). On other Linux distributions, even with newer kernels, users still need to change a boot configuration (see below) and reboot the system to enable cgroup v2.

Check whether cgroup v2 is enabled on your Linux system:
```
mount | grep '^cgroup' | awk '{print $1}' | uniq
```

And you can try to enable cgroup v2:
```
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
reboot
```

Overall, cgroup v2 has a better design and is easier to use, but we should also support cgroup v1 because v1 is widely used and has been hard-coded into the [OCI](https://opencontainers.org/) standards. See this [blog](https://www.redhat.com/sysadmin/fedora-31-control-group-v2) for more information.

#### cgroupfs
The traditional way to manage cgroups is to write to the cgroup file system directly, usually referred to as cgroupfs, e.g.:
```
mkdir /sys/fs/cgroup/{worker_id}                              # Create the cgroup
echo "200000 1000000" > /sys/fs/cgroup/{worker_id}/cpu.max    # Set the CPU quota
echo {pid} > /sys/fs/cgroup/{worker_id}/cgroup.procs          # Bind the process
```
NOTE: This is an example based on cgroup v2. The command lines for cgroup v1 are different and incompatible.

We can also use [libcgroup](https://github.com/libcgroup/libcgroup/blob/main/README) to simplify the implementation. This library supports both cgroup v1 and cgroup v2.

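For illustration, a minimal Python sketch of the same cgroup v2 cgroupfs steps; the `ray-worker-` naming and the default quota values are assumptions for this example:

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"  # assumed cgroup v2 mount path


def create_worker_cgroup(worker_id: str, pid: int,
                         cpu_quota_us: int = 200_000,
                         cpu_period_us: int = 1_000_000) -> str:
    """Create a cgroup for one worker and move the worker process into it.

    Minimal cgroup v2 sketch; requires write permission on the cgroup fs.
    """
    cgroup_path = os.path.join(CGROUP_ROOT, f"ray-worker-{worker_id}")
    os.makedirs(cgroup_path, exist_ok=True)

    # At most `cpu_quota_us` of CPU time in every `cpu_period_us` period.
    with open(os.path.join(cgroup_path, "cpu.max"), "w") as f:
        f.write(f"{cpu_quota_us} {cpu_period_us}")

    # Bind the worker process to the cgroup.
    with open(os.path.join(cgroup_path, "cgroup.procs"), "w") as f:
        f.write(str(pid))

    return cgroup_path
```
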
#### systemd
Systemd, the init daemon on most Linux hosts, is also a cgroup driver that uses cgroups to manage processes. It provides a wrapped way to bind a process to a cgroup, so that we don't have to manage the cgroup files manually. For example:
```
systemctl set-property {worker_id}.service CPUQuota=20%   # Bind process and cgroup automatically.
systemctl start {worker_id}.service
```

NOTE: The full list of config options is [here](https://man7.org/linux/man-pages/man5/systemd.resource-control.5.html). We can also use `StartTransientUnit` to create a cgroup together with the worker process. This is a [dbus](https://www.freedesktop.org/wiki/Software/systemd/dbus/) API, and there is a [dbus-python](https://dbus.freedesktop.org/doc/dbus-python/) Python library that we can use.

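A sketch of the `StartTransientUnit` route with dbus-python might look like the following; the unit-name prefix and the chosen properties are illustrative assumptions, not a finalized design:

```python
import dbus


def attach_worker_to_scope(worker_id: str, pid: int) -> None:
    """Move an existing worker process into a transient systemd scope with a
    20% CPU quota (200,000 us of CPU time per 1 s period)."""
    bus = dbus.SystemBus()
    systemd = bus.get_object("org.freedesktop.systemd1",
                             "/org/freedesktop/systemd1")
    manager = dbus.Interface(systemd, "org.freedesktop.systemd1.Manager")

    properties = [
        ("CPUQuotaPerSecUSec", dbus.UInt64(200_000)),
        ("PIDs", dbus.Array([dbus.UInt32(pid)], signature="u")),
    ]
    # "fail" means the call fails if a unit with this name already exists.
    manager.StartTransientUnit(f"ray-worker-{worker_id}.scope",
                               "fail", properties, [])
```
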
#### Why we should support both cgroupfs and systemd
Cgroupfs and systemd are the two mainstream ways to manage cgroups. [Container technology](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/) also supports both of them, and so should we.

Reasons for using cgroupfs:
- Cgroupfs is the traditional and common way to manage cgroups. If your login user has write access to the cgroup file system, you can create cgroups.
- In some environments you cannot use systemd, e.g. on a non-systemd based system or in a container that cannot access the systemd service.

Reasons for using systemd:
- Systemd is highly recommended by [runc](https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md) for cgroup v2.
- If we use cgroupfs on a systemd-based system, more than one component will manage the cgroup tree simultaneously.
- According to the systemd [docs](https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/), in the future, creating/deleting cgroups will be unavailable to userspace applications unless done via systemd's APIs.
- Systemd has a good abstract API for cgroups.

So, we should support both cgroupfs and systemd, and we can provide a config to change the cgroup manager.

### Changes needed in Ray

1. Add a Cgroup Manager module in the Dashboard Agent.

The Cgroup Manager is used to create or delete cgroups and to bind worker processes to cgroups. We plan to integrate the Cgroup Manager into the Agent.

The Cgroup Manager should expose an abstract interface that hides the differences between cgroupfs/systemd and cgroup v1/v2, as sketched below.

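A minimal sketch of what that abstract interface could look like; the class and method names are illustrative assumptions, not a finalized API:

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class CgroupManager(ABC):
    """Illustrative interface; concrete subclasses would implement the
    cgroupfs and systemd backends for cgroup v1 or cgroup v2."""

    @abstractmethod
    def create_cgroup(self, worker_id: str,
                      resource_control_config: Dict) -> None:
        """Create a cgroup for the worker with the requested limits."""

    @abstractmethod
    def get_command_prefix(self, worker_id: str) -> List[str]:
        """Return the commands to prepend to the worker command (appended
        to `command_prefix` in the runtime env context)."""

    @abstractmethod
    def delete_cgroup(self, worker_id: str) -> None:
        """Clean up the cgroup after the worker process dies."""
```
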
2. Generate the cgroup command line.

The Agent should generate the command line from the Cgroup Manager. The command line will be appended to the `command_prefix` field of the runtime env context. A sample command line looks like:

```
mkdir /sys/fs/cgroup/{worker_id} && echo "200000 1000000" > /sys/fs/cgroup/{worker_id}/cpu.max && echo {pid} > /sys/fs/cgroup/{worker_id}/cgroup.procs
```

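For concreteness, the generated prefix could be carried in the runtime env context roughly as below; the exact field layout is an assumption for illustration:

```python
# Hypothetical shape of the runtime env context after the Agent adds the
# cgroup commands; only `command_prefix` matters for this proposal.
runtime_env_context = {
    "command_prefix": [
        "mkdir /sys/fs/cgroup/{worker_id}",
        'echo "200000 1000000" > /sys/fs/cgroup/{worker_id}/cpu.max',
        "echo {pid} > /sys/fs/cgroup/{worker_id}/cgroup.procs",
    ],
}
```
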
3. Start the worker process with the command line.

Currently, we use `setup_worker.py` to enforce the runtime environments. `setup_worker.py` merges the `command_prefix` of the runtime env context with the real worker command. In the same way, the cgroup commands will be run together with the worker process, e.g. as sketched below.

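A simplified sketch of that merge step (the helper name is an assumption; the real logic in `setup_worker.py` is more involved):

```python
from typing import List


def build_worker_command(command_prefix: List[str],
                         worker_command: List[str]) -> str:
    """Chain the cgroup setup commands with the real worker command so that
    the worker only starts after its cgroup has been prepared."""
    return " && ".join(command_prefix + [" ".join(worker_command)])


# e.g. "mkdir /sys/fs/cgroup/<id> && ... && python default_worker.py ..."
```
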
4. Delete the cgroup.

When a worker process dies, we should delete the cgroup that was created for it. This is only needed for cgroupfs, because systemd automatically deletes the cgroup when the process dies.

So, we should add a new RPC named `CleanForDeadWorker` to `RuntimeEnvService`. The Raylet sends this RPC to the Agent, and the Agent deletes the cgroup.

```
message CleanForDeadWorkerRequest {
  string worker_id = 1;
}

message CleanForDeadWorkerReply {
  AgentRpcStatus status = 1;
  string error_message = 2;
}

service RuntimeEnvService {
  ...
  rpc CleanForDeadWorker(CleanForDeadWorkerRequest)
      returns (CleanForDeadWorkerReply);
}
```

## Compatibility, Deprecation, and Migration Plan

This proposal does not change any existing APIs or any default behaviors of Ray Core.

## Test Plan and Acceptance Criteria

We plan to benchmark the resource usage of worker processes:
- CPU soft control: The worker can use idle CPU time in excess of its CPU quota.
- CPU hard control: With `cpu_strict_usage=True`, the worker cannot exceed its CPU quota.
- Memory soft control: The worker can use idle memory in excess of its memory quota.
- Memory hard control: With `memory_strict_usage=True`, the worker cannot exceed its memory quota.

Acceptance criteria:
- A set of reasonable APIs.
- A set of reasonable benchmark results.

## (Optional) Follow-on Work

In the first version, we may only support the cgroupfs-based cgroup manager; a systemd-based cgroup manager can be added later.
We can also add control for more resources, like `blkio`, `devices`, and `net_cls`.

## Appendix

### The work steps of resource control based on cgroup

When we run `ray.init` with a `runtime_env` and `eager_install` is enabled, the main steps are:
- (**Step 1**) The Raylet (Worker Pool) receives the published message that the job has started.
- (**Step 2**) The Raylet sends the RPC `GetOrCreateRuntimeEnv` to the Agent.
- (**Step 3**) The Agent sets up the `runtime_env`. For `resource_control`, the Agent generates the `command_prefix` in the `runtime_env_context` that describes how to enable the resource control.

When we create a `Task` or `Actor` with a `runtime_env` (or an inherited `runtime_env`), the main steps are:
- (**Step 4**) The worker submits the task.
- (**Step 5**) The `task_spec` is received by the Raylet after scheduling.
- (**Step 6**) The Raylet sends the RPC `GetOrCreateRuntimeEnv` to the Agent.
- (**Step 7**) The Agent generates the `command_prefix` in the `runtime_env_context` and replies to the RPC.
- (**Step 8**) The Raylet starts the new worker process with the `runtime_env_context`.
- (**Step 9**) `setup_worker.py` sets up the `resource_control` via the `command_prefix` for the new worker process.

### More references
- [Yarn NodeManagerCgroups](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html)