Commit bd3b00c: "updates"

1 parent 623db7c commit bd3b00c
File tree

1 file changed: +119 −84 lines changed

lectures/likelihood_ratio_process_2.md

Lines changed: 119 additions & 84 deletions
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.16.6
+    jupytext_version: 1.17.1
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -1320,14 +1320,14 @@ Bayes' Law tells us that posterior probabilities on models $f$ and $g$ evolve ac

 $$
 \pi^f(s^t) := \frac{\pi^f_0 f(s^t)}{\pi^f_0 f(s^t)
-  + \pi^g(s^t) g(s^t) + (1 - \pi^f_0 - \pi^g_0) h(s^t)}
+  + \pi^g_0 g(s^t) + (1 - \pi^f_0 - \pi^g_0) h(s^t)}
 $$

 and

 $$
 \pi^g(s^t) := \frac{\pi^g_0 g(s^t)}{\pi^f_0 f(s^t)
-  + \pi^g(s^t) g(s^t) + (1 - \pi^f_0 - \pi^g_0) h(s^t)}
+  + \pi^g_0 g(s^t) + (1 - \pi^f_0 - \pi^g_0) h(s^t)}
 $$
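The corrected denominators above (prior weight $\pi^g_0$ rather than posterior $\pi^g(s^t)$) are just Bayes' Law over three models. A minimal sketch of that update, not taken from the lecture itself and computed in log space for numerical stability:

```python
import numpy as np

def posteriors(logL_f, logL_g, logL_h, pf0, pg0):
    """Posterior (π^f, π^g, π^h) from prior weights and log-likelihoods.

    Implements π^f(s^t) = π^f_0 f(s^t) / (π^f_0 f(s^t) + π^g_0 g(s^t)
    + (1 - π^f_0 - π^g_0) h(s^t)), with likelihoods passed as logs.
    """
    logw = np.array([np.log(pf0) + logL_f,
                     np.log(pg0) + logL_g,
                     np.log(1 - pf0 - pg0) + logL_h])
    logw -= logw.max()          # guard against under/overflow
    w = np.exp(logw)
    return w / w.sum()
```

With equal priors and equal likelihoods the posterior stays uniform; as one model's log-likelihood of the history grows, its posterior weight approaches 1.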
@@ -1346,7 +1346,7 @@ Please simulate and visualize evolutions of posterior probabilities and consum

 Let's implement this three-model case with two agents having different beliefs.

-First, let's define $f$ and $g$ far apart, with $h$ being a mixture of $f$ and $g$
+First, let's define $f$ and $g$ far apart, with $h$ being a mixture of $f$ and $g$.

 ```{code-cell} ipython3
 F_a, F_b = 1, 1
@@ -1359,7 +1359,7 @@ g = jit(lambda x: p(x, G_a, G_b))
 h = jit(lambda x: π_f_0 * f(x) + (1 - π_f_0) * g(x))
 ```

-Now we can define the belief updating for the three-agent model
+Now we can define the belief updating for the model

 ```{code-cell} ipython3
 @jit(parallel=True)
@@ -1404,7 +1404,7 @@ def compute_posterior_three_models(
     return π_f, π_g, π_h
 ```

-Let's also write simulation code along lines similar to earlier exercises
+Let's also write simulation code along the lines of earlier exercises

 ```{code-cell} ipython3
 @jit(parallel=True)
@@ -1454,83 +1454,91 @@ The following code cell defines a plotting function to show evolutions of belief
 ```{code-cell} ipython3
 :tags: [hide-input]

-def plot_three_model_results(c1_data, π_data, nature_labels, λ=0.5,
-                             agent_labels=None, figsize=(12, 10)):
+def plot_belief_evolution(π_data, nature_labels, figsize=(15, 5)):
     """
-    Create plots for three-model exercises.
+    Create plots showing belief evolution for three models (f, g, h) for both agents.
+    Each row corresponds to a different nature scenario.
     """
     n_scenarios = len(nature_labels)
-    fig, axes = plt.subplots(2, n_scenarios, figsize=figsize)
+    fig, axes = plt.subplots(n_scenarios, 3, figsize=figsize)
     if n_scenarios == 1:
-        axes = axes.reshape(2, 1)
+        axes = axes.reshape(1, 3)

-    colors = ['blue', 'green', 'orange']
+    model_names = ['f', 'g', 'h']

-    for i, (nature_label, c1, π_tuple) in enumerate(
-            zip(nature_labels, c1_data, π_data)):
+    for i, (nature_label, π_tuple) in enumerate(zip(nature_labels, π_data)):
         πf1, πg1, πh1, πf2, πg2, πh2 = π_tuple
-
-        ax = axes[i, 0]
-        ax.plot(np.median(πf1, axis=0), 'C0-', linewidth=2)
-        ax.plot(np.median(πg1, axis=0), 'C0--', linewidth=2)
-        ax.plot(np.median(πh1, axis=0), 'C0:', linewidth=2)
-        ax.plot(np.median(πf2, axis=0), 'C1-', linewidth=2)
-        ax.plot(np.median(πg2, axis=0), 'C1--', linewidth=2)
-        ax.plot(np.median(πh2, axis=0), 'C1:', linewidth=2)
-
-        # Truth indicator
-        truth_val = 1.0 if nature_label == 'f' else (
-            1.0 if nature_label == 'g' else 0.0)
-        ax.axhline(y=truth_val, color='grey', linestyle='-.', alpha=0.7)
+        π_data_models = [(πf1, πf2), (πg1, πg2), (πh1, πh2)]

-        ax.set_title(f'Beliefs when Nature = {nature_label}')
-        ax.set_xlabel('$t$')
-        ax.set_ylabel(r'median $\pi(\cdot)$')
-        ax.set_ylim([-0.01, 1.01])
-
-        if i == 0:
-            from matplotlib.lines import Line2D
-
-            # Agent colors legend
-            agent_elements = [
-                Line2D([0], [0], color='C0', linewidth=2, label='agent 1'),
-                Line2D([0], [0], color='C1', linewidth=2, label='agent 2')
-            ]
-            agent_legend = ax.legend(handles=agent_elements, loc='upper left')
+        for j, (model_name, (π1, π2)) in enumerate(zip(model_names, π_data_models)):
+            ax = axes[i, j]

-            # Line styles legend
-            style_elements = [
-                Line2D([0], [0], color='black',
-                       linestyle='-', label='π(f)'),
-                Line2D([0], [0], color='black',
-                       linestyle='--', label='π(g)'),
-                Line2D([0], [0], color='black',
-                       linestyle=':', label='π(h)'),
-                Line2D([0], [0], color='grey',
-                       linestyle='-.', alpha=0.7, label='truth')
-            ]
-            ax.legend(handles=style_elements, loc='upper right')
+            # Plot agent beliefs
+            ax.plot(np.median(π1, axis=0), 'C0-', linewidth=2, label='agent 1')
+            ax.plot(np.median(π2, axis=0), 'C1-', linewidth=2, label='agent 2')

-            ax.add_artist(agent_legend)
+            # Truth indicator
+            if nature_label == model_name:
+                ax.axhline(y=1.0, color='grey', linestyle='-.',
+                           alpha=0.7, label='truth')
+            else:
+                ax.axhline(y=0.0, color='grey', linestyle='-.',
+                           alpha=0.7, label='truth')

-        ax = axes[i, 1]
+            ax.set_title(f'π({model_name}) when Nature = {nature_label}')
+            ax.set_xlabel('$t$')
+            ax.set_ylabel(f'median π({model_name})')
+            ax.set_ylim([-0.01, 1.01])
+            ax.legend(loc='best')
+
+    plt.tight_layout()
+    return fig, axes
+
+
+def plot_consumption_dynamics(c1_data, nature_labels, λ=0.5, figsize=(12, 4)):
+    """
+    Create plots showing consumption share dynamics for agent 1.
+    """
+    n_scenarios = len(nature_labels)
+    fig, axes = plt.subplots(1, n_scenarios, figsize=figsize)
+    if n_scenarios == 1:
+        axes = [axes]
+
+    colors = ['blue', 'green', 'orange']
+
+    for i, (nature_label, c1) in enumerate(zip(nature_labels, c1_data)):
+        ax = axes[i]
         c1_med = np.median(c1, axis=0)
-        ax.plot(c1_med, color=colors[i], linewidth=2, label="median")
-        ax.axhline(y=0.5, color='grey', linestyle='--', alpha=0.5)
-        ax.set_title(
-            f'Agent 1 consumption share (Nature = {nature_label})')
-        ax.set_xlabel('t')
-        ax.set_ylabel("median consumption share")
+
+        # Plot median and percentiles
+        ax.plot(c1_med, color=colors[i % len(colors)],
+                linewidth=2, label="median")
+
+        # Add percentile bands
+        c1_25 = np.percentile(c1, 25, axis=0)
+        c1_75 = np.percentile(c1, 75, axis=0)
+        ax.fill_between(range(len(c1_med)), c1_25, c1_75,
+                        color=colors[i % len(colors)], alpha=0.2,
+                        label="25-75 percentile")
+
+        ax.axhline(y=0.5, color='grey', linestyle='--',
+                   alpha=0.5, label='equal share')
+        ax.axhline(y=λ, color='red', linestyle=':',
+                   alpha=0.5, label=f'initial share (λ={λ})')
+
+        ax.set_title(f'Agent 1 consumption share (Nature = {nature_label})')
+        ax.set_xlabel('$t$')
+        ax.set_ylabel("consumption share")
         ax.set_ylim([-0.01, 1.01])
-        ax.legend()
+        ax.legend(loc='best')

     plt.tight_layout()
     return fig, axes
 ```

 Now let's run the simulation.

-In the simulation below, agent 1 assigns positive probabilities only to $f$ and $g$, while agent 2 puts equal weights on all three models.
+In the simulation below, agent 1 assigns positive probabilities only to $f$ and $g$, while agent 2 puts equal weights on all three models.

 ```{code-cell} ipython3
 T = 100
@@ -1551,24 +1559,51 @@ c1_data = [results_f[0], results_g[0]]
 π_data = [results_f[1:], results_g[1:]]
 nature_labels = ['f', 'g']

-fig, axes = plot_three_model_results(c1_data, π_data, nature_labels, λ)
-plt.show()
+plot_belief_evolution(π_data, nature_labels, figsize=(15, 5*len(nature_labels)))
+plt.plot();
 ```

-Agent 1's posterior probabilities are depicted with orange lines and agent 2's posterior beliefs are depicted with blue lines.
+These plots show the evolution of beliefs for each model (f, g, h) separately.
+
+Agent 1's posterior probabilities are depicted in blue and agent 2's posterior beliefs are depicted in orange.

-The top panel shows outcomes when nature draws from $f$.
+The top panel shows outcomes when nature draws from $f$.
+
+Evidently, when nature draws from $f$, agent 1 learns faster than agent 2, who, unlike agent 1, attaches a positive prior probability to model $h$:
+
+- In the leftmost panel, both agents' beliefs for $\pi(f)$ converge toward 1 (the truth)
+- Agent 1 learns faster than agent 2, who initially assigns probability to model $h$
+- Agent 2's belief in model $h$ (rightmost panel) gradually converges to 0

-Evidently, when nature draws from $f$, agent 1 learns faster than agent 2, who, unlike agent 1, attaches a positive prior probability to model $h$.

 The bottom panel depicts outcomes when nature draws from $g$.

-Again, agent 1 learns faster than agent 2, who, unlike agent 1, attaches some prior probability to model $h$.
+Again, agent 1 learns faster than agent 2, who, unlike agent 1, attaches some prior probability to model $h$:
+
+- In the middle panel, both agents' beliefs for $\pi(g)$ converge toward 1 (the truth)
+- Again, agent 1 learns faster due to not considering model $h$ initially
+- Agent 2's belief in model $h$ converges to 0 over time

-* In both panels, agent 2's posterior probability attached to $h$ (dotted line) converges to 0.
+In both panels, agent 2's posterior probability attached to $h$ (dotted line) converges to 0.

-Notice that when nature uses model $f$, the consumption share of agent 1 is only temporarily bigger than 1, when when nature uses model $g$, agent 1's consumption share is permanently higher.

+Note the difference in the convergence speed when nature draws from $f$ and $g$.
+
+The time it takes for agent 2 to "catch up" is longer when nature draws from $g$.
+
+Agent 1 converges faster because it only needs to update beliefs between two models ($f$ and $g$), while agent 2 must also rule out model $h$.
+
+
+Before reading the next figure, please guess how consumption shares evolve.
+
+Remember that agent 1 reaches the correct model faster than agent 2.
+
+```{code-cell} ipython3
+plot_consumption_dynamics(c1_data, nature_labels, λ=0.5, figsize=(12, 6))
+plt.show()
+```
+
+This plot shows the consumption share dynamics. Notice that when nature uses model $f$, the consumption share of agent 1 is only temporarily higher than 0.5, while when nature uses model $g$, agent 1's consumption share is permanently higher.

 In this exercise, the "truth" is among possible outcomes according to both agents.

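The temporary-versus-permanent consumption-share pattern discussed in the diff can be illustrated with a small calculation. The sketch below is hypothetical and not code from this commit: it assumes the standard log-utility risk-sharing rule in which agent 1's share is $\lambda L^1 / (\lambda L^1 + (1 - \lambda) L^2)$, where $L^i$ is agent $i$'s mixture likelihood of the observed history.

```python
import numpy as np

def consumption_share(logL1, logL2, λ=0.5):
    # Agent 1's share under an (assumed) log-utility risk-sharing rule:
    #   c1 = λ L1 / (λ L1 + (1 - λ) L2),
    # computed from log-likelihoods to avoid overflow.
    w = np.exp(logL1 - logL2)
    return λ * w / (λ * w + (1 - λ))

# When both agents fit the data equally well, the share stays at λ
print(consumption_share(0.0, 0.0))   # 0.5
# When agent 1's model fits much better, the share rises toward 1
print(consumption_share(5.0, 0.0))
```

This makes the mechanism concrete: a persistent likelihood advantage for agent 1 pushes the share permanently above 0.5, while a transient advantage only moves it temporarily.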
@@ -1599,9 +1634,6 @@ Please simulate and visualize evolutions of posterior probabilities and consum

 * Nature permanently draws from $f$
 * Nature permanently draws from $g$
-
-
-
 ```

 ```{solution-start} lr_ex7
@@ -1674,25 +1706,28 @@ c1_data = [results_f[0], results_g[0]]
 π_data = [results_f[1:], results_g[1:]]
 nature_labels = ['f', 'g']

-fig, axes = plot_three_model_results(c1_data, π_data, nature_labels, λ)
-plt.show()
+plot_belief_evolution(π_data, nature_labels, figsize=(15, 5*len(nature_labels)))
+plt.plot();
 ```

-In the top panel, which depicts outcomes when nature draws from $f$, please observe how slowly agent 1 learns the truth.
+When nature draws from $f$ (top row), observe how slowly agent 1 learns the truth in the leftmost panel showing $\pi(f)$.

-The posterior probability that agent 2 puts on $h$ converges to zero slowly.
+The posterior probability that agent 1 puts on $h$ (rightmost panel) converges to zero slowly.

+This is because we have specified that $f$ is very difficult to distinguish from $h$ as measured by $KL(f, h)$.

-This is because we have specified that $f$ is very difficult to distinguish from $h$ as measured by $KL(f, h)$.
+When it comes to agent 2, the belief remains stationary at 0 and does not converge to the true model because of its rigidity regarding $h$, and $f$ is very difficult to distinguish from $h$.

-The bottom panel shows outcomes when nature draws from $g$.
+When nature draws from $g$ (bottom row), we have specified things so that $g$ is further away from $h$ as measured by the KL divergence.

-We have specified things so that $g$ is further away from $h$ as measured by the KL divergence.
+This helps both agents learn the truth more quickly, as seen in the middle panel showing $\pi(g)$.

-This helps agent 2 learn the truth more quickly.
+```{code-cell} ipython3
+plot_consumption_dynamics(c1_data, nature_labels, λ=0.5, figsize=(12, 6))
+plt.show()
+```

-Notice that agent 1's consumption share converges to 1 both when nature permanently draws from $f$
-and when nature permanently draws from $g$.
+In the consumption dynamics plot, notice that agent 1's consumption share converges to 1 both when nature permanently draws from $f$ and when nature permanently draws from $g$.

 ```{solution-end}
 ```
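The solution's appeal to KL divergence can be checked numerically. The sketch below is hypothetical: the Beta parameters are assumptions for illustration (the commit shows only `F_a, F_b = 1, 1`, and the true `G_a, G_b` are not visible in this diff).

```python
import numpy as np
from math import gamma

def beta_pdf(x, a, b):
    # Density of Beta(a, b) on (0, 1)
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

# Hypothetical parameter choices, not the lecture's
f = lambda x: beta_pdf(x, 1, 1)
g = lambda x: beta_pdf(x, 3, 1.2)
h = lambda x: 0.5 * f(x) + 0.5 * g(x)   # h is a mixture of f and g

def kl(p, q, n=200_000):
    # KL(p, q) = ∫ p(x) log(p(x)/q(x)) dx, via a Riemann sum on (0, 1)
    x = np.linspace(1e-6, 1 - 1e-6, n)
    dx = x[1] - x[0]
    return float(np.sum(p(x) * np.log(p(x) / q(x))) * dx)

print(f"KL(f, h) = {kl(f, h):.4f}")
print(f"KL(g, h) = {kl(g, h):.4f}")
```

The model with the larger divergence from $h$ is the one agents can separate from $h$ more quickly, which is the mechanism the solution describes.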
