---
-jupytext:
-  text_representation:
-    extension: .md
-    format_name: myst
-    format_version: 0.13
-    jupytext_version: 1.17.2
-kernelspec:
-  display_name: Python 3 (ipykernel)
-  language: python
-  name: python3
+jupyter:
+  jupytext:
+    default_lexer: ipython3
+    text_representation:
+      extension: .md
+      format_name: markdown
+      format_version: '1.3'
+      jupytext_version: 1.17.2
+  kernelspec:
+    display_name: Python 3 (ipykernel)
+    language: python
+    name: python3
---

(mccall_with_sep_markov)=
@@ -89,14 +91,14 @@ When unemployed and receiving wage offer $w$, the agent chooses between:
The unemployed worker's value function satisfies the Bellman equation

$$
-    v_u(w) = \max\{v_e(w), c + \beta \sum_{w'} v_u(w') P(w,w')\}
+    v_u(w) = \max\{v_e(w), u(c) + \beta \sum_{w'} v_u(w') P(w,w')\}
$$

The employed worker's value function satisfies the Bellman equation

$$
    v_e(w) =
-        w + \beta
+        u(w) + \beta
        \left[
            \alpha \sum_{w'} v_u(w') P(w,w') + (1-\alpha) v_e(w)
        \right]
@@ -114,7 +116,7 @@ We use the following approach to solve this problem.

$$
    v_e(w) =
-        \frac{1}{1-\beta(1-\alpha)} \cdot (w + \alpha\beta(Pv_u)(w))
+        \frac{1}{1-\beta(1-\alpha)} \cdot (u(w) + \alpha\beta(Pv_u)(w))
$$

2. Substitute into the unemployed agent's Bellman equation to get:

$$
    v_u(w) =
    \max
    \left\{
-        \frac{1}{1-\beta(1-\alpha)} \cdot (w + \alpha\beta(Pv_u)(w)),
-        c + \beta(Pv_u)(w)
+        \frac{1}{1-\beta(1-\alpha)} \cdot (u(w) + \alpha\beta(Pv_u)(w)),
+        u(c) + \beta(Pv_u)(w)
    \right\}
$$

3. Use value function iteration to solve for $v_u$

-4. Compute optimal policy: accept if $v_e(w) \geq c + \beta(Pv_u)(w)$
+4. Compute optimal policy: accept if $v_e(w) \geq u(c) + \beta(Pv_u)(w)$

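For completeness, here is the algebra behind step 1: collecting the $v_e(w)$ terms in the employed worker's Bellman equation gives

$$
    (1-\beta(1-\alpha)) \, v_e(w) = u(w) + \alpha\beta(Pv_u)(w),
$$

and dividing through by $1-\beta(1-\alpha)$ yields the expression used above.
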
The optimal policy turns out to be a reservation wage strategy: accept all wages above some threshold.


## Code

+The default utility function is a CRRA utility function:
+
+```{code-cell} ipython3
+def u(c, γ):
+    return (c**(1 - γ) - 1) / (1 - γ)
+```
+
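Note that this expression is undefined at γ = 1. The default below is γ = 1.5, so the issue never arises, but if one wanted to allow log utility, a guarded variant along the following lines (an editorial sketch, not part of the original code) would work:

```{code-cell} ipython3
import jax.numpy as jnp

def u_with_log_limit(c, γ):
    # CRRA utility, replaced by its log limit when γ is (close to) 1
    return jnp.where(jnp.isclose(γ, 1.0),
                     jnp.log(c),
                     (c**(1 - γ) - 1) / (1 - γ))
```
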
Let's set up a `Model` class to store information needed to solve the model.

We include `P_cumsum`, the row-wise cumulative sum of the transition matrix, to
@@ -152,18 +161,25 @@ class Model(NamedTuple):
    β: float
    c: float
    α: float
+    γ: float
```

The function below holds default values and creates a `Model` instance:

+The wage offer process is constructed as the exponential of a discretized AR(1) process:
+
+* discretize a Gaussian AR(1) process of the form $X' = \rho X + \nu Z'$
+* take the exponential of the resulting process
+
```{code-cell} ipython3
def create_js_with_sep_model(
        n: int = 200,          # wage grid size
        ρ: float = 0.9,        # wage persistence
        ν: float = 0.2,        # wage volatility
        β: float = 0.96,       # discount factor
        α: float = 0.05,       # separation rate
-        c: float = 1.0         # unemployment compensation
+        c: float = 1.0,        # unemployment compensation
+        γ: float = 1.5         # utility parameter
    ) -> Model:
    """
    Creates an instance of the job search model with separation.
@@ -172,18 +188,18 @@ def create_js_with_sep_model(
    mc = tauchen(n, ρ, ν)
    w_vals, P = jnp.exp(jnp.array(mc.state_values)), jnp.array(mc.P)
    P_cumsum = jnp.cumsum(P, axis=1)
-    return Model(n, w_vals, P, P_cumsum, β, c, α)
+    return Model(n, w_vals, P, P_cumsum, β, c, α, γ)
```
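
As a quick check (an editorial aside, assuming the cells above have been run), the wage grid produced this way is strictly positive and increasing, since it is the exponential of the Tauchen grid:

```{code-cell} ipython3
check_model = create_js_with_sep_model()
print(float(check_model.w_vals.min()), float(check_model.w_vals.max()))
print(bool(jnp.all(jnp.diff(check_model.w_vals) > 0)))
```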

Here's the Bellman operator for the unemployed worker's value function:

```{code-cell} ipython3
def T(v: jnp.ndarray, model: Model) -> jnp.ndarray:
    """The Bellman operator for the value of being unemployed."""
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model
    d = 1 / (1 - β * (1 - α))
-    accept = d * (w_vals + α * β * P @ v)
-    reject = c + β * P @ v
+    accept = d * (u(w_vals, γ) + α * β * P @ v)
+    reject = u(c, γ) + β * P @ v
    return jnp.maximum(accept, reject)
```

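As a minimal illustration (editorial, assuming the cells above have been run), a single application of the operator to the zero vector returns the larger of the accept and reject values evaluated at $v = 0$:

```{code-cell} ipython3
tmp_model = create_js_with_sep_model()
v_next = T(jnp.zeros(tmp_model.n), tmp_model)
print(v_next.shape)
```
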
@@ -193,10 +209,10 @@ the value function:
```{code-cell} ipython3
def get_greedy(v: jnp.ndarray, model: Model) -> jnp.ndarray:
    """Get a v-greedy policy."""
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model
    d = 1 / (1 - β * (1 - α))
-    accept = d * (w_vals + α * β * P @ v)
-    reject = c + β * P @ v
+    accept = d * (u(w_vals, γ) + α * β * P @ v)
+    reject = u(c, γ) + β * P @ v
    σ = accept >= reject
    return σ
```
@@ -247,7 +263,7 @@ def get_reservation_wage(σ: jnp.ndarray, model: Model) -> float:
    Returns:
    - Reservation wage (lowest wage for which policy indicates acceptance)
    """
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model

    # Find the first index where policy indicates acceptance
    # σ is a boolean array, argmax returns the first True value
@@ -264,7 +280,7 @@ Let's solve the model:

```{code-cell} ipython3
model = create_js_with_sep_model()
-n, w_vals, P, P_cumsum, β, c, α = model
+n, w_vals, P, P_cumsum, β, c, α, γ = model
v_star = vfi(model)
σ_star = get_greedy(v_star, model)
```
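
As a quick verification (editorial, assuming the cells above have been run), the computed policy is indeed a cutoff rule: once acceptance becomes optimal, it stays optimal at every higher wage on the grid:

```{code-cell} ipython3
print(bool(jnp.all(jnp.diff(σ_star.astype(jnp.int32)) >= 0)))
```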
@@ -273,8 +289,8 @@ Next we compute some related quantities, including the reservation wage.

```{code-cell} ipython3
d = 1 / (1 - β * (1 - α))
-accept = d * (w_vals + α * β * P @ v_star)
-h_star = c + β * P @ v_star
+accept = d * (u(w_vals, γ) + α * β * P @ v_star)
+h_star = u(c, γ) + β * P @ v_star
w_star = get_reservation_wage(σ_star, model)
```

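For reference (editorial, using the objects just computed), the reservation wage itself can be displayed directly:

```{code-cell} ipython3
print(f"Reservation wage: {float(w_star):.4f}")
```
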
@@ -346,7 +362,7 @@ def update_agent(key, is_employed, wage_idx, model, σ):
    period via the probabilities in P(w, .)

    """
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model

    key1, key2 = jax.random.split(key)
    # Use precomputed cumulative sum for efficient sampling
@@ -390,7 +406,7 @@ def simulate_employment_path(
    """
    key = jax.random.PRNGKey(seed)
    # Unpack model
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model

    # Initial conditions
    is_employed = 0
@@ -556,7 +572,7 @@ def _simulate_cross_section_compiled(
    ):
    """JIT-compiled core simulation loop using lax.fori_loop.
    Returns only the final employment state to save memory."""
-    n, w_vals, P, P_cumsum, β, c, α = model
+    n, w_vals, P, P_cumsum, β, c, α, γ = model

    # Initialize arrays
    wage_indices = jnp.zeros(n_agents, dtype=jnp.int32)
@@ -699,7 +715,6 @@ model_low_c = create_js_with_sep_model(c=0.5)
699715plot_cross_sectional_unemployment(model_low_c)
700716```
701717
-
## Exercises

```{exercise-start}