Commit 623db7c

Tom's second Aug 21 edits of the BE lecture
1 parent a5ada58 commit 623db7c


lectures/likelihood_ratio_process_2.md

Lines changed: 41 additions & 19 deletions
@@ -1331,11 +1331,12 @@ $$
$$


-Simulate and visualize the evolution of consumption allocations when:
+Please simulate and visualize evolutions of posterior probabilities and consumption allocations when:
+
* Nature permanently draws from $f$
* Nature permanently draws from $g$

-Use the existing code structure to implement this simulation and observe how the allocation evolves over time.
+

```

@@ -1403,7 +1404,7 @@ def compute_posterior_three_models(
    return π_f, π_g, π_h
```

-Let's also write the simulation code following the same idea as in the previous exercises
+Let's also write simulation code along lines similar to earlier exercises

```{code-cell} ipython3
@jit(parallel=True)
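
The body of the simulation code is truncated in this hunk; only the tail of `compute_posterior_three_models` and the decorator of the simulation function appear. For readers of the diff, here is a minimal, self-contained sketch of the three-model Bayes update that the lecture's function presumably performs; the helper name and the Beta parameters below are illustrative assumptions, not taken from the lecture.

```python
# Sketch only (not from the commit): recursive Bayes updating over three
# candidate densities f, g, h. The lecture's jit-compiled implementation is
# truncated in this diff and may differ; the Beta parameters below are
# illustrative assumptions.
import numpy as np
from scipy.stats import beta

def update_posteriors_three_models(w_seq, pdf_f, pdf_g, pdf_h,
                                    π_f0=1/3, π_g0=1/3, π_h0=1/3):
    """Return the path of posterior probabilities after each draw in w_seq."""
    π_f, π_g, π_h = π_f0, π_g0, π_h0
    path = [(π_f, π_g, π_h)]
    for w in w_seq:
        # Bayes' law: reweight each model by the likelihood of the new draw
        a, b, c = π_f * pdf_f(w), π_g * pdf_g(w), π_h * pdf_h(w)
        total = a + b + c
        π_f, π_g, π_h = a / total, b / total, c / total
        path.append((π_f, π_g, π_h))
    return np.array(path)

# Hypothetical example: nature draws permanently from f
f, g, h = beta(1, 1), beta(3, 1.2), beta(1.8, 1.8)   # assumed parameters
draws = f.rvs(size=200, random_state=1)
print(update_posteriors_three_models(draws, f.pdf, g.pdf, h.pdf)[-1])
```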
@@ -1529,7 +1530,7 @@ def plot_three_model_results(c1_data, π_data, nature_labels, λ=0.5,

Now let's run the simulation.

-In our simulation, agent 1 assigns positive probabilities only to $f$ and $g$, while agent 2 puts equal weights on all three models
+In the simulation below, agent 1 assigns positive probabilities only to $f$ and $g$, while agent 2 puts equal weights on all three models.

```{code-cell} ipython3
T = 100
@@ -1554,17 +1555,30 @@ fig, axes = plot_three_model_results(c1_data, π_data, nature_labels, λ)
plt.show()
```

-The results show interesting dynamics.
+Agent 1's posterior probabilities are depicted with orange lines and agent 2's posterior beliefs are depicted with blue lines.
+
+The top panel shows outcomes when nature draws from $f$.
+
+Evidently, when nature draws from $f$, agent 1 learns faster than agent 2, who, unlike agent 1, attaches a positive prior probability to model $h$.
+
+The bottom panel depicts outcomes when nature draws from $g$.
+
+Again, agent 1 learns faster than agent 2, who, unlike agent 1, attaches some prior probability to model $h$.
+
+* In both panels, agent 2's posterior probability attached to $h$ (dotted line) converges to 0.
+
+Notice that when nature uses model $f$, the consumption share of agent 1 is only temporarily bigger than 1, while when nature uses model $g$, agent 1's consumption share is permanently higher.

-In the top panel, Agent 1 (orange line) who initially puts weight only on $f$ (solid line) and $g$ (dashed line) eventually dominates consumption as they learn the truth faster than Agent 2 who spreads probability across all three models.

-When nature draws from $g$ (lower panel), we see a similar pattern but reversed -- Agent 1's consumption share decreases as their belief converges to the truth.
+In this exercise, the "truth" is among possible outcomes according to both agents.

-For both cases, the belief on $h$ (dotted line) eventually goes to 0.
+Agent 2's model is "more general" because it allows a possibility -- that nature is drawing from $h$ -- that agent 1's model does not include.

-The agent with the simpler (but correct) model structure learns faster and eventually dominates consumption allocation.
+Agent 1 learns more quickly because he uses a simpler model.

-In other words, the model penalizes complexity and rewards accuracy.
+It would be interesting to explore why agent 1's consumption allocation when $f$ generates the data is only temporarily higher than agent 2's, while when $g$ generates the data, it is permanently higher.
+
+* Hint: Somehow the KL divergence should be able to help us sort this out.

```{solution-end}
```
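
A natural first step on the hint above is to compare the two KL divergences numerically. Below is a minimal sketch, assuming for illustration only that $f \sim \text{Beta}(1, 1)$ and $g \sim \text{Beta}(3, 1.2)$; the lecture's actual parameters may differ.

```python
# Sketch only (not from the commit): numerically compare KL(f, g) and
# KL(g, f) for two Beta densities on (0, 1). Parameters are assumptions.
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def kl_divergence(p, q, eps=1e-6):
    """Approximate KL(p || q) = ∫ p(x) log(p(x)/q(x)) dx over (0, 1)."""
    integrand = lambda x: p.pdf(x) * np.log(p.pdf(x) / q.pdf(x))
    value, _ = quad(integrand, eps, 1 - eps)
    return value

f = beta(1, 1)      # assumed f
g = beta(3, 1.2)    # assumed g

print(f"KL(f, g) = {kl_divergence(f, g):.4f}")   # drift rate of log likelihood ratio when nature draws from f
print(f"KL(g, f) = {kl_divergence(g, f):.4f}")   # drift rate when nature draws from g
```

If the two numbers differ substantially, that asymmetry in drift rates is one candidate explanation for why agent 1's consumption advantage is temporary under $f$ but permanent under $g$.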
@@ -1581,23 +1595,23 @@ Consider the same setup as the previous exercise, but now:
Choose $h$ to be close but not equal to either $f$ or $g$ as measured by KL divergence.
For example, set $h \sim \text{Beta}(1.2, 1.1)$.

-Simulate and visualize the evolution of consumption allocations when:
+Please simulate and visualize evolutions of posterior probabilities and consumption allocations when:
+
* Nature permanently draws from $f$
* Nature permanently draws from $g$

-Observe how the presence of extreme priors affects learning and allocation dynamics.
+

```

```{solution-start} lr_ex7
:class: dropdown
```

-Let's implement this case with extreme priors where one agent is almost dogmatic.

-For this to converge, we need a longer sequence by increasing $T$ to 1000.
+To explore this exercise, we increase $T$ to 1000.

-Let's define the parameters for distributions and verify that $h$ and $f$ are closer than $h$ and $g$
+Let's specify $f, g$, and $h$ and verify that $h$ and $f$ are closer than $h$ and $g$

```{code-cell} ipython3
F_a, F_b = 1, 1
@@ -1664,13 +1678,21 @@ fig, axes = plot_three_model_results(c1_data, π_data, nature_labels, λ)
plt.show()
```

-In the top panel, observe how slowly agent 1 is adjusting to the truth -- the belief is rigid but still updating.
+In the top panel, which depicts outcomes when nature draws from $f$, please observe how slowly agent 1 learns the truth.
+
+The posterior probability that agent 2 puts on $h$ converges to zero slowly.
+
+
+This is because we have specified that $f$ is very difficult to distinguish from $h$ as measured by $KL(f, h)$.
+
+The bottom panel shows outcomes when nature draws from $g$.

-The belief about $h$ slowly shifts towards 0 crossing the belief about $f$ moving up to 1 at $t = 500$.
+We have specified things so that $g$ is further away from $h$ as measured by the KL divergence.

-However, since agent 2 is rigid about $h$, and $f$ is very difficult to distinguish from $h$ as measured by $KL(f, h)$, we can see that the belief is almost stationary due to the difficulty of realizing the belief is incorrect.
+This helps agent 2 learn the truth more quickly.

-In the bottom panel, since $g$ is further away from $h$, both agents adjust toward the truth very quickly, but agent 1 acts faster given the slightly higher weight on $f$ and $g$.
+Notice that agent 1's consumption share converges to 1 both when nature permanently draws from $f$
+and when nature permanently draws from $g$.

```{solution-end}
```
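
To check the claim that $h$ is closer to $f$ than to $g$, the same numerical KL computation sketched earlier can be reused. In the self-contained sketch below, $f \sim \text{Beta}(1, 1)$ matches `F_a, F_b = 1, 1` shown in the hunk above and $h \sim \text{Beta}(1.2, 1.1)$ matches the exercise statement, while the parameters for $g$ are an assumption.

```python
# Sketch only (not from the commit): verify that h = Beta(1.2, 1.1) is closer
# to f than to g in KL divergence. f = Beta(1, 1) matches the diff above;
# g's parameters are an assumption.
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def kl_divergence(p, q, eps=1e-6):
    """Approximate KL(p || q) = ∫ p(x) log(p(x)/q(x)) dx over (0, 1)."""
    integrand = lambda x: p.pdf(x) * np.log(p.pdf(x) / q.pdf(x))
    value, _ = quad(integrand, eps, 1 - eps)
    return value

f = beta(1, 1)        # F_a, F_b = 1, 1 (from the diff)
g = beta(3, 1.2)      # assumed parameters for g
h = beta(1.2, 1.1)    # h as specified in the exercise

print(f"KL(f, h) = {kl_divergence(f, h):.4f}")   # expected to be small
print(f"KL(g, h) = {kl_divergence(g, h):.4f}")   # expected to be larger
```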
