Commit 1f661b1: Add draft RFC for setting GRPC_FAIL_FAST to use_caller by default

# GRPC Fail Fast By Default

| Status        | Proposed                                                 |
| :------------ | :------------------------------------------------------- |
| **RFC #**     | [NNN](https://github.com/tensorflow/community/pull/NNN)  |
| **Author(s)** | Haoyu Zhang ([email protected])                          |
| **Sponsor**   | Bramandia Ramadhana ([email protected])                  |
| **Updated**   | 2021-02-18                                               |

## Objective

We propose to set the default value of the `GRPC_FAIL_FAST` environment
variable to `use_caller`. This change prevents TensorFlow distributed jobs from
hanging indefinitely due to task failures, and allows users and TF libraries
(e.g., distribution strategies) to handle connection errors for better failure
and preemption recovery.

## Background

`GRPC_FAIL_FAST` is a TensorFlow distributed runtime environment variable that
controls the behavior of RPC requests when observing a network disconnection
with remote servers. It can be set to one of the following values:

*   `true`, which immediately reports an `UnavailableError` when there is a
    connection issue, for all RPCs, regardless of the per-RPC configurations;
*   `false`, which will (in most cases) hang until successfully connected to
    the remote server, for all RPCs (see
    [gRPC `wait_for_ready`](https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md)),
    regardless of the per-RPC configurations;
*   `use_caller`, which is `true` for RPCs used in distributed execution (such
    as `RecvTensor`, `RunComponentFunction`), and `false` for RPCs used to
    initialize remote execution environments (e.g., `GetStatus`).
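The variable is read from the process environment by the distributed runtime. As
a minimal sketch (not prescribed by this RFC), it can be configured from Python,
as long as this happens before the TensorFlow runtime creates its gRPC channels,
i.e., in practice at the top of the program:

```python
import os

# GRPC_FAIL_FAST is read from the process environment by the TensorFlow
# distributed runtime, so it must be set before the runtime starts; in
# practice, set it at the top of the program before TensorFlow initializes.
os.environ["GRPC_FAIL_FAST"] = "use_caller"  # or "true" / "false"
```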
The default value of `GRPC_FAIL_FAST` is currently `false`. As a result, users
need to
[manually configure this environment variable](https://github.com/tensorflow/tensorflow/blob/1178262a2a55fa634a2390291fc633c515e28884/tensorflow/python/distribute/parameter_server_strategy_v2.py#L106)
to receive reasonable exceptions when workers fail or get preempted; otherwise
the cluster hangs and cannot recover from failures.

## Proposed Change

We propose to set the default value of `GRPC_FAIL_FAST` to `use_caller`. By
doing so, the runtime reports errors quickly to detect remote server failures
during execution, while still allowing the client to start early and wait for
remote servers to establish initial connections. This should be the desired
behavior for most use cases.

In the context of TensorFlow 2, the default behavior of the following RPCs used
for distributed execution will be changed from hanging on failures (current
behavior) to immediately reporting failures (after the change):

*   `EagerService.CreateContext`
*   `EagerService.UpdateContext`
*   `EagerService.WaitQueueDone`
*   `EagerService.KeepAlive`
*   `EagerService.Enqueue`
*   `EagerService.RunComponentFunction`
*   `WorkerService.RecvTensor`
*   `WorkerService.RecvBuf`
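The resulting per-RPC semantics can be pictured as a simple dispatch. The sketch
below is a hypothetical illustration, not TensorFlow code (the function and set
names are invented), of how `use_caller` differs from the global `true`/`false`
settings:

```python
# Hypothetical illustration of `use_caller` semantics: each RPC belongs to
# either the execution path or the environment-setup path, and the
# environment variable decides whether fail-fast applies to it.
EXECUTION_RPCS = {
    "EagerService.Enqueue", "EagerService.RunComponentFunction",
    "WorkerService.RecvTensor", "WorkerService.RecvBuf",
}

def effective_fail_fast(rpc_name: str, env_value: str) -> bool:
    if env_value == "true":   # always fail fast, regardless of the RPC
        return True
    if env_value == "false":  # always wait for the server to come up
        return False
    # "use_caller": execution RPCs fail fast; setup RPCs such as
    # WorkerService.GetStatus keep waiting (gRPC wait_for_ready).
    return rpc_name in EXECUTION_RPCS
```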
The default behavior of the following RPC will not change: it will still hang
if the remote task cannot be reached.

*   `WorkerService.GetStatus`

The `GetStatus` RPC is typically the first RPC sent from the client to
initialize a distributed execution environment, in both the single- and
multi-client modes. The underlying implementation uses gRPC's
[`wait_for_ready`](https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md)
flag, which allows the client to start before the remote server in the
deployment.

## User Impact

Most users should see the new default as the expected behavior in distributed
execution. Users can take advantage of the built-in fault tolerance support in
`ParameterServerStrategy` without having to change the environment variable
configuration. In other setups, exceptions will be raised to the model training
loop code, where users can catch and handle these errors with custom logic.
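A minimal, self-contained sketch of the retry pattern this enables follows.
`UnavailableError` here is a stand-in for `tf.errors.UnavailableError`, and
`train_step`/`recover` are hypothetical user-supplied callables:

```python
class UnavailableError(Exception):
    """Stand-in for tf.errors.UnavailableError in this sketch."""

def run_with_retry(train_step, recover, num_steps):
    """Run `train_step` num_steps times, retrying a step whenever a
    remote task failure surfaces as an UnavailableError."""
    completed = 0
    while completed < num_steps:
        try:
            train_step()
            completed += 1
        except UnavailableError:
            recover()  # e.g., wait for the preempted worker to rejoin
    return completed
```

With the fail-fast default, the error is raised promptly instead of the step
hanging indefinitely, which is what makes a loop like this viable.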
Certain users might receive "false alarms" if there are transient connection
errors to the remote servers. We expect this to happen very rarely, since gRPC
(built on top of the HTTP and TCP protocols) should already handle packet drops
and network flakiness in most cases, and only report errors when there are real
network or server failures. However, if this does happen, set
`GRPC_FAIL_FAST=false` to override the default value and revert to the previous
behavior, and please also file an issue to inform the TensorFlow Runtime team.
