@@ -34,7 +34,7 @@ defmodule Phoenix.PubSub do
3434	 It supports a `:pool_size` option to be given alongside
3535	 the name, which defaults to `1`. Note the `:pool_size` must
3636	 be the same throughout the cluster; therefore, don't
37- configure the pool size based on `System.schedulers_online/1`,
37+ configure the pool size based on `System.schedulers_online/0`,
3838 especially if you are using machines with different specs.
3939
4040 * `Phoenix.PubSub.Redis` - uses Redis to exchange data between
@@ -59,6 +59,83 @@ defmodule Phoenix.PubSub do
5959 custom `value` to provide "fastlaning", allowing messages broadcast
6060 to thousands or even millions of users to be encoded once and written
6161 directly to sockets instead of being encoded per channel.
62+
63+	 ## Safe pool size migration (when using the `Phoenix.PubSub.PG2` adapter)
64+
65+	 When you need to change the pool size in a running cluster,
66+	 the `:broadcast_pool_size` option lets you do so without losing
67+	 messages. This matters most when increasing the pool size: nodes
68+	 still running the old size have no shard to receive broadcasts
sent on the new shards, so those messages would otherwise be dropped.
69+
70+ Here's how to safely increase the pool size from 1 to 2:
71+
72+ 1. Initial state - Current configuration with `pool_size: 1`:
73+ ```
74+ {Phoenix.PubSub, name: :my_pubsub, pool_size: 1}
75+ ```
76+
77+ ```mermaid
78+ graph TD
79+ subgraph "Initial State"
80+ subgraph "Node 1"
81+ A1[Shard 1<br/>Broadcast & Receive]
82+ end
83+ subgraph "Node 2"
84+ B1[Shard 1<br/>Broadcast & Receive]
85+ end
86+ A1 <--> B1
87+ end
88+ ```
89+
90+ 2. First deployment - Set the new pool size but keep broadcasting on the old size:
91+ ```
92+ {Phoenix.PubSub, name: :my_pubsub, pool_size: 2, broadcast_pool_size: 1}
93+ ```
94+
95+ ```mermaid
96+ graph TD
97+ subgraph "First Deployment"
98+ subgraph "Node 1"
99+ A1[Shard 1<br/>Broadcast & Receive]
100+ A2[Shard 2<br/>Broadcast & Receive]
101+ end
102+ subgraph "Node 2"
103+ B1[Shard 1<br/>Broadcast & Receive]
104+ B2[Shard 2<br/>Receive Only]
105+ end
106+ A1 <--> B1
107+ A2 --> B2
108+ end
109+ ```
110+
111+ 3. Final deployment - All nodes running with new pool size:
112+ ```
113+ {Phoenix.PubSub, name: :my_pubsub, pool_size: 2}
114+ ```
115+
116+ ```mermaid
117+ graph TD
118+ subgraph "Final State"
119+ subgraph "Node 1"
120+ A1[Shard 1<br/>Broadcast & Receive]
121+ A2[Shard 2<br/>Broadcast & Receive]
122+ end
123+ subgraph "Node 2"
124+ B1[Shard 1<br/>Broadcast & Receive]
125+ B2[Shard 2<br/>Broadcast & Receive]
126+ end
127+ A1 <--> B1
128+ A2 <--> B2
129+ end
130+ ```
131+
132+ This two-step process ensures that:
133+	 - All nodes can receive messages on both the old and the new shards
134+ - No messages are lost during the transition
135+ - The cluster remains fully functional throughout the deployment
136+
137+ To decrease the pool size, follow the same process in reverse order.
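
The deployment steps above can also be driven from application
configuration, so each rollout only changes config values. A minimal
sketch, assuming a hypothetical app named `:my_app`, a pubsub named
`:my_pubsub`, and a made-up `:pubsub_opts` config key:

```
# config/runtime.exs (hypothetical keys); values change per deployment step:
config :my_app, :pubsub_opts, pool_size: 2, broadcast_pool_size: 1

# In the application's supervision tree, merge in the configured options:
pubsub_opts = Application.get_env(:my_app, :pubsub_opts, [])

children = [
  {Phoenix.PubSub, [name: :my_pubsub] ++ pubsub_opts}
]
```

Once the final step is reached, drop `:broadcast_pool_size` from the
configured options so broadcasting falls back to the full `:pool_size`.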
138+
62139 """
63140
64141	 @type node_name :: atom | binary
@@ -87,6 +164,9 @@ defmodule Phoenix.PubSub do
87164 * `:adapter` - the adapter to use (defaults to `Phoenix.PubSub.PG2`)
88165 * `:pool_size` - number of pubsub partitions to launch
89166 (defaults to one partition for every 4 cores)
167+ * `:broadcast_pool_size` - number of pubsub partitions used for broadcasting messages
168+ (defaults to `:pool_size`). This option is used during pool size migrations to ensure
169+	 no messages are lost. See the "Safe pool size migration" section in the module documentation.
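
For instance, a pubsub started under a supervisor with these options
might look like the following (a sketch; `:my_pubsub` is a placeholder
name and the values are illustrative):

```
children = [
  {Phoenix.PubSub, name: :my_pubsub, adapter: Phoenix.PubSub.PG2, pool_size: 4}
]

Supervisor.start_link(children, strategy: :one_for_one)
```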
90170
91171 """
92172	 @spec child_spec(keyword) :: Supervisor.child_spec()