Commit b787259

committed: minor updates

1 parent 2dc53c3 commit b787259
File tree: 1 file changed, +25 -31 lines

lectures/likelihood_ratio_process_2.md (25 additions, 31 deletions)
@@ -1013,8 +1013,6 @@ Please write Python code that answers the following questions.
 :class: dropdown
 ```
 
-
-
 First, let's write helper functions that compute model components including each agent's subjective belief function.
 
 ```{code-cell} ipython3
@@ -1312,9 +1310,9 @@ We'll consider two agents:
 * Agent 2: $\pi^g_0 = \pi^f_0 = 1/3$, $\pi^h_0 = 1/3$
 (attaches equal weights to all three models)
 
-Let $f$ and $g$ be two beta distributions with $f \sim \text{Beta}(1, 1)$ and
-$g \sim \text{Beta}(3, 1.2)$, and
-set $h = \pi^f_0 f + (1-\pi^f_0) g$.
+Let $f$ and $g$ be two beta distributions with $f \sim \text{Beta}(3, 2)$ and
+$g \sim \text{Beta}(2, 3)$, and
+set $h = \pi^f_0 f + (1-\pi^f_0) g$ with $\pi^f_0 = 0.5$.
 
 Bayes' Law tells us that posterior probabilities on models $f$ and $g$ evolve according to
 
@@ -1335,9 +1333,6 @@ Please simulate and visualize evolutions of posterior probabilities and consumption allocations when:
 
 * Nature permanently draws from $f$
 * Nature permanently draws from $g$
-
-
-
 ```
 
 ```{solution-start} lr_ex6
@@ -1346,7 +1341,7 @@ Please simulate and visualize evolutions of posterior probabilities and consumption allocations when:
 
 Let's implement this three-model case with two agents having different beliefs.
 
-First, let's define $f$ and $g$ far apart, with $h$ being a mixture of $f$ and $g$.
+Let's define $f$ and $g$ far apart, with $h$ being a mixture of $f$ and $g$.
 
 ```{code-cell} ipython3
 F_a, F_b = 3, 2
@@ -1447,7 +1442,6 @@ def simulate_three_model_allocation(sequences, f_func, g_func, h_func,
         l_agents_cumul = 1.0
 
         # Calculate initial consumption share at t=0
-        # (before any observations, likelihood ratio = 1)
         l_agents_seq[n, 0] = 1.0
         c1_share[n, 0] = λ * 1.0 / (1 - λ + λ * 1.0)  # This equals λ
 
@@ -1481,9 +1475,11 @@ def simulate_three_model_allocation(sequences, f_func, g_func, h_func,
 
         # Compute mixture densities
         m1_t = compute_mixture_density(
-            π_f_1, π_g_1, π_h_1, densities['f'], densities['g'], densities['h'])
+            π_f_1, π_g_1, π_h_1, densities['f'],
+            densities['g'], densities['h'])
         m2_t = compute_mixture_density(
-            π_f_2, π_g_2, π_h_2, densities['f'], densities['g'], densities['h'])
+            π_f_2, π_g_2, π_h_2, densities['f'],
+            densities['g'], densities['h'])
 
         # Update cumulative likelihood ratio between agents
         l_agents_cumul *= (m1_t / m2_t)
@@ -1511,19 +1507,24 @@ The following code cell defines a plotting function to show evolutions of belief
 
 def plot_belief_evolution(results, nature='f', figsize=(15, 5)):
     """
-    Create plots showing belief evolution for three models (f, g, h) for both agents.
+    Create plots showing belief evolution for three models (f, g, h).
     """
     fig, axes = plt.subplots(1, 3, figsize=figsize)
 
     model_names = ['f', 'g', 'h']
-    belief_keys = [('π_f_1', 'π_f_2'), ('π_g_1', 'π_g_2'), ('π_h_1', 'π_h_2')]
+    belief_keys = [('π_f_1', 'π_f_2'),
+                   ('π_g_1', 'π_g_2'),
+                   ('π_h_1', 'π_h_2')]
 
-    for j, (model_name, (key1, key2)) in enumerate(zip(model_names, belief_keys)):
+    for j, (model_name, (key1, key2)) in enumerate(
+            zip(model_names, belief_keys)):
         ax = axes[j]
 
         # Plot agent beliefs
-        ax.plot(np.median(results[key1], axis=0), 'C0-', linewidth=2, label='agent 1')
-        ax.plot(np.median(results[key2], axis=0), 'C1-', linewidth=2, label='agent 2')
+        ax.plot(np.median(results[key1], axis=0), 'C0-',
+                linewidth=2, label='agent 1')
+        ax.plot(np.median(results[key2], axis=0), 'C1-',
+                linewidth=2, label='agent 2')
 
         # Truth indicator
         if model_name == nature:
@@ -1545,15 +1546,16 @@ def plot_belief_evolution(results, nature='f', figsize=(15, 5)):
 
 def plot_consumption_dynamics(results_f, results_g, λ=0.5, figsize=(14, 5)):
     """
-    Create plot showing consumption share dynamics for agent 1 for both nature states.
+    Create plot showing consumption share dynamics for agent 1.
     """
     fig, axes = plt.subplots(1, 2, figsize=figsize)
 
     results_list = [results_f, results_g]
     nature_labels = ['f', 'g']
     colors = ['blue', 'green']
 
-    for i, (results, nature_label, color) in enumerate(zip(results_list, nature_labels, colors)):
+    for i, (results, nature_label, color) in enumerate(
+            zip(results_list, nature_labels, colors)):
         ax = axes[i]
         c1 = results['c1_share']
         c1_med = np.median(c1, axis=0)
@@ -1610,28 +1612,21 @@ plot_belief_evolution(results_f, nature='f', figsize=(15, 5))
 plt.show()
 ```
 
-Agent 1's posterior probabilities are depicted in blue and agent 2's posterior beliefs are depicted in orange.
+Agent 1's posterior beliefs are depicted in blue and agent 2's posterior beliefs are depicted in orange.
 
 Evidently, when nature draws from $f$, agent 1 learns faster than agent 2, who, unlike agent 1, attaches a positive prior probability to model $h$:
 
 - In the leftmost panel, both agents' beliefs for $\pi(f)$ converge toward 1 (the truth)
-- Agent 1 learns faster than agent 2
 - Agent 2's belief in model $h$ (rightmost panel) gradually converges to 0 after an initial rise
 
-Now let's plot the belief evolution when nature = g:
+Now let's plot the belief evolution when nature chooses $g$:
 
 ```{code-cell} ipython3
 plot_belief_evolution(results_g, nature='g', figsize=(15, 5))
 plt.show()
 ```
 
-Again, agent 1 learns faster than agent 2:
-
-Note the difference in the convergence speed when nature draws from $f$ and $g$.
-
-The time it takes for agent 2 to "catch up" is longer when nature draws from $g$.
-
-This is because agent 1's prior is closer to the truth when nature draws from $g$
+Again, agent 1 learns faster than agent 2.
 
 Before reading the next figure, please guess how consumption shares evolve.
 
@@ -1663,7 +1658,7 @@ Consider the same setup as the previous exercise, but now:
 * Agent 2: $\pi^g_0 = \pi^f_0 = 0$ (rigid belief in model $h$)
 
 Choose $h$ to be close but not equal to either $f$ or $g$ as measured by KL divergence.
-For example, set $h \sim \text{Beta}(1.2, 1.1)$.
+For example, set $h \sim \text{Beta}(1.2, 1.1)$ and $f \sim \text{Beta}(1, 1)$.
 
 Please simulate and visualize evolutions of posterior probabilities and consumption allocations when:
 
@@ -1701,7 +1696,6 @@ print(f"KL(f,h) = {Kf_h:.4f}, KL(g,h) = {Kg_h:.4f}")
 Now we can set the belief models for the two agents
 
 ```{code-cell} ipython3
-# Set extreme priors
 ε = 0.01
 λ = 0.5
 
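As a sanity check on the mechanics this commit touches, here is a minimal standalone sketch of the three-model Bayesian learning setup, assuming the Beta(3, 2) / Beta(2, 3) parameterization, the mixture $h = 0.5 f + 0.5 g$, and $\lambda = 0.5$ from the updated text; the structure below is a hypothetical reconstruction, not the lecture's actual code.

```python
# Hypothetical sketch (not the lecture's code): posterior updating over
# three models f, g, h with f ~ Beta(3, 2), g ~ Beta(2, 3), h = 0.5 f + 0.5 g.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

f = lambda x: beta.pdf(x, 3, 2)
g = lambda x: beta.pdf(x, 2, 3)
h = lambda x: 0.5 * f(x) + 0.5 * g(x)

# Agent 2's prior: equal weight 1/3 on each of the models (f, g, h)
π = np.array([1/3, 1/3, 1/3])

# Nature permanently draws from f; apply Bayes' law after each observation
for w in rng.beta(3, 2, size=200):
    dens = np.array([f(w), g(w), h(w)])
    π = π * dens / (π @ dens)

print(π)  # posterior concentrates on the true model f

# Agent 1's consumption share given Pareto weight λ and cumulative
# likelihood ratio l between the agents' mixtures (the diff's formula)
λ = 0.5
c1_share = lambda l: λ * l / (1 - λ + λ * l)  # equals λ at l = 1

# Monte Carlo estimate of KL(f, h), mirroring the diff's KL printout
x = rng.beta(3, 2, size=100_000)
Kf_h = np.mean(np.log(f(x) / h(x)))
print(f"KL(f,h) = {Kf_h:.4f}")
```

Because KL(f, h) is positive, the cumulative likelihood ratio in favor of $f$ grows over time, which is why the posterior weight on $h$ eventually vanishes even though $h$ contains $f$ as a mixture component.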