
Commit 122a708

Tom's edits of Blume-Easley lecture and bib file
1 parent 43a0390 commit 122a708

File tree

* lectures/_static/quant-econ.bib
* lectures/likelihood_ratio_process_2.md

2 files changed, +67 -39 lines changed

lectures/_static/quant-econ.bib

Lines changed: 10 additions & 0 deletions
@@ -3,6 +3,16 @@
 Note: Extended Information (like abstracts, doi, url's etc.) can be found in quant-econ-extendedinfo.bib file in _static/
 ###

+@article{blume2018case,
+  title={A case for incomplete markets},
+  author={Blume, Lawrence E and Cogley, Timothy and Easley, David A and Sargent, Thomas J and Tsyrennikov, Viktor},
+  journal={Journal of Economic Theory},
+  volume={178},
+  pages={191--221},
+  year={2018},
+  publisher={Elsevier}
+}
+
 @article{shannon1948mathematical,
   title={A mathematical theory of communication},
   author={Shannon, Claude E},

lectures/likelihood_ratio_process_2.md

Lines changed: 57 additions & 39 deletions
@@ -52,7 +52,7 @@ We'll study two alternative arrangements:
 The fundamental theorems of welfare economics will apply and assure us that these two arrangements end up producing exactly the same allocation of consumption goods to individuals **provided** that the social planner assigns an appropriate set of **Pareto weights**.

 ```{note}
-You can learn about how the two welfare theorems are applied in modern macroeconomic models in {doc}`this lecture on a planning problem <cass_koopmans_1>` and {doc}`this lecture on a related competitive equilibrium <cass_koopmans_2>`.
+You can learn about how the two welfare theorems are applied in modern macroeconomic models in {doc}`this lecture on a planning problem <cass_koopmans_1>` and {doc}`this lecture on a related competitive equilibrium <cass_koopmans_2>`. {doc}`This quantecon lecture <ge_arrow>` presents a recursive formulation of complete markets models with homogeneous beliefs.
 ```

@@ -830,13 +830,22 @@ This ties in nicely with {eq}`eq:kl_likelihood_link`.

 ## Related Lectures

-Likelihood processes play an important role in Bayesian learning, as described in {doc}`likelihood_bayes`
-and as applied in {doc}`odu`.
+Complete markets models with homogeneous beliefs, a kind often used in macroeconomics and finance, are studied in this quantecon lecture {doc}`ge_arrow`.
+
+{cite}`blume2018case` discuss a paternalistic case against complete markets. Their analysis assumes that a social planner should disregard individuals' preferences in the sense that it should disregard the subjective belief components of their preferences.
+
+Likelihood processes play an important role in Bayesian learning, as described in {doc}`likelihood_bayes` and as applied in {doc}`odu`.

 Likelihood ratio processes appear again in {doc}`advanced:additive_functionals`.


-## Exercise
+
+{doc}`ge_arrow`
+
+
+
+
+## Exercises

 ```{exercise}
 :label: lr_ex3
@@ -892,7 +901,7 @@ $$
 c_t^1(s^t) = \frac{\lambda l_t(s^t)}{1 - \lambda + \lambda l_t(s^t)}
 $$

-To match them, we need the following equality to hold
+To match agent 1's choice in a competitive equilibrium with the planner's choice for agent 1, the following equality must hold

 $$
 \frac{\mu_2}{\mu_1} = \frac{\lambda}{1 - \lambda}
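A minimal sketch of the allocation rule in the hunk above (not part of the committed lecture code; the array values and helper name are illustrative). It evaluates $c_t^1(s^t) = \lambda l_t(s^t)/(1 - \lambda + \lambda l_t(s^t))$ for a made-up likelihood ratio path, and notes that the matching condition amounts to $\lambda = \mu_2/(\mu_1 + \mu_2)$.

```python
import numpy as np

def consumption_share_1(l_t, λ):
    """Agent 1's consumption share under the planner: λ l_t / (1 - λ + λ l_t)."""
    return λ * l_t / (1 - λ + λ * l_t)

# Illustrative likelihood ratio path l_t(s^t); λ is the Pareto weight on agent 1.
# The matching condition μ2/μ1 = λ/(1-λ) is equivalent to λ = μ2/(μ1 + μ2).
l_path = np.array([1.0, 1.3, 0.8, 2.5, 6.0])
print(consumption_share_1(l_path, λ=0.5))  # with equal weights the share is l_t / (1 + l_t)
```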
@@ -932,9 +941,12 @@ $$
 ```{exercise}
 :label: lr_ex4

-In this exercise, we will implement the Blume-Easley model with learning agents.
+In this exercise, we'll study two agents, each of whom updates its posterior probability as
+data arrive.
+
+* each agent applies Bayes' law in the way studied in {doc}`likelihood_bayes`.

-Consider the two models
+The following two models are on the table

 $$
 f(s^t) = f(s_1) f(s_2) \cdots f(s_t)
@@ -943,29 +955,27 @@ $$
 and

 $$
-g(s^t) = g(s_1) g(s_2) \cdots g(s_t)
+g(s^t) = g(s_1) g(s_2) \cdots g(s_t)
 $$

-and associated likelihood ratio process
+as is an associated likelihood ratio process

 $$
-L(s^t) = \frac{f(s^t)}{g(s^t)}
+L(s^t) = \frac{f(s^t)}{g(s^t)} .
 $$

 Let $\pi_0 \in (0,1)$ be a prior probability and

 $$
-\pi_t = \frac{ \pi_0 L(s^t)}{ \pi_0 L(s^t) + (1-\pi_0) }
+\pi_t = \frac{ \pi_0 L(s^t)}{ \pi_0 L(s^t) + (1-\pi_0) } .
 $$

-and the mixture model
+Each of our two agents deploys its own version of the mixture model

 $$
 m(s^t) = \pi_t f(s^t) + (1- \pi_t) g(s^t)
 $$ (eq:be_mix_model)

-Now consider them in the environment in our Blume-Easley lecture.
-
 We'll endow each type of consumer with model {eq}`eq:be_mix_model`.

 * The two agents share the same $f$ and $g$, but
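The two displays in the hunk above are Bayes' law in likelihood-ratio form and the mixture model that each agent carries into the exercise. Here is a minimal, self-contained sketch (not part of the commit, and not the lecture's `bayesian_update` or `mixture_density_belief` helpers, which may be organized differently); the Beta densities are the ones the exercise specifies, while the sample length and seed are arbitrary.

```python
import numpy as np
from scipy.stats import beta

def posterior(π_0, L_t):
    """π_t = π_0 L(s^t) / (π_0 L(s^t) + (1 - π_0)), with L_t the cumulative likelihood ratio."""
    return π_0 * L_t / (π_0 * L_t + 1 - π_0)

# Densities used in the exercise: f ~ Beta(1.5, 1), g ~ Beta(1, 1.5)
f_pdf = beta(1.5, 1).pdf
g_pdf = beta(1, 1.5).pdf

# A short sample path drawn from f, and the joint densities f(s^t), g(s^t)
s = beta(1.5, 1).rvs(size=25, random_state=1234)
f_joint = np.cumprod(f_pdf(s))
g_joint = np.cumprod(g_pdf(s))

π_t = posterior(0.5, f_joint / g_joint)        # posterior weight on model f
m_joint = π_t * f_joint + (1 - π_t) * g_joint  # m(s^t) = π_t f(s^t) + (1 - π_t) g(s^t)
print(π_t[-1])                                 # tends toward 1 when nature draws from f
```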
@@ -977,29 +987,35 @@ $$
 m^i(s^t) = \pi^i_t f(s^t) + (1- \pi^i_t) g(s^t)
 $$ (eq:prob_model)

-The idea is to hand probability models {eq}`eq:prob_model` for $i=1,2$ to the social planner in the Blume-Easley lecture, deduce allocation $c^i(s^t), i = 1,2$, and watch what happens when
+We now hand probability models {eq}`eq:prob_model` for $i=1,2$ to the social planner.
+
+We want to deduce allocation $c^i(s^t), i = 1,2$, and watch what happens when

 * nature's model is $f$
 * nature's model is $g$

-Both consumers will eventually learn the "truth", but one of them will learn faster.
+We expect that consumers will eventually learn the "truth", but that one of them will learn faster.
+
+To explore things, please set $f \sim \text{Beta}(1.5, 1)$ and $g \sim \text{Beta}(1, 1.5)$.
+
+Please write Python code that answers the following questions.
+
+* How do consumption shares evolve?
+* Which agent learns faster when nature follows $f$?
+* Which agent learns faster when nature follows $g$?
+* How does a difference in initial priors $\pi_0^1$ and $\pi_0^2$ affect the convergence speed?

-Questions:
-1. How do their consumption shares evolve?
-2. Which agent learns faster when nature follows $f$? When nature follows $g$?
-3. How does the difference in initial priors $\pi_0^1$ and $\pi_0^2$ affect the convergence speed?

-In the exercise below, set $f \sim \text{Beta}(1.5, 1)$ and $g \sim \text{Beta}(1, 1.5)$.

 ```

 ```{solution-start} lr_ex4
 :class: dropdown
 ```

-Here is one solution.

-First, let's set up the model with learning agents:
+
+First, let's write helper functions that compute model components including each agent's subjective belief function.

 ```{code-cell} ipython3
 def bayesian_update(π_0, L_t):
@@ -1017,7 +1033,7 @@ def mixture_density_belief(s_seq, f_func, g_func, π_seq):
     return π_seq * f_vals + (1 - π_seq) * g_vals
 ```

-Now let's implement the learning Blume-Easley simulation:
+Now let's write code that simulates the Blume-Easley model with our two agents.

 ```{code-cell} ipython3
 def simulate_learning_blume_easley(sequences, f_belief, g_belief,
@@ -1096,7 +1112,7 @@ f = jit(lambda x: p(x, F_a, F_b))
 g = jit(lambda x: p(x, G_a, G_b))
 ```

-We start with different initial priors $\pi^i_0 \in (0, 1)$ and widen the gap between them.
+We'll start with different initial priors $\pi^i_0 \in (0, 1)$ and widen the gap between them.

 ```{code-cell} ipython3
 # Different initial priors
@@ -1128,7 +1144,7 @@ for i, (π_0_1, π_0_2) in enumerate(π_0_scenarios):
                                       s_seq_g, f, g, π_0_1, π_0_2, λ)
 ```

-Now let's visualize the results
+Let's visualize the results

 ```{code-cell} ipython3
 def plot_learning_results(results, π_0_scenarios, nature_type, truth_value):
@@ -1180,44 +1196,44 @@ def plot_learning_results(results, π_0_scenarios, nature_type, truth_value):
     return fig, axes
 ```

-Now use the function to plot results when nature follows f:
+Now we'll plot outcomes when nature follows f:

 ```{code-cell} ipython3
 fig_f, axes_f = plot_learning_results(
     results_f, π_0_scenarios, 'f', 1.0)
 plt.show()
 ```

-We can see that the agent with more "accurate" belief gets higher consumption share.
+We can see that the agent with the more accurate belief gets a higher consumption share.

-Moreover, the further the initial beliefs are, the longer it takes for the consumption ratio to converge.
+Moreover, the further apart are initial beliefs, the longer it takes for the consumption ratio to converge.

-The time it takes for the "less accurate" agent costs their share in future consumption.
+The longer it takes for the "less accurate" agent to learn, the lower its ultimate consumption share.

-Now plot results when nature follows g:
+Now let's plot outcomes when nature follows g:

 ```{code-cell} ipython3
 fig_g, axes_g = plot_learning_results(results_g, π_0_scenarios, 'g', 0.0)
 plt.show()
 ```

-We observe a similar but symmetrical pattern.
+We observe symmetrical outcomes.

 ```{solution-end}
 ```

 ```{exercise}
 :label: lr_ex5

-In the previous exercise, we specifically set the two beta distributions to be relatively close to each other.
+In the previous exercise, we purposefully set the two beta distributions to be relatively close to each other.

-That is to say, it is harder to distinguish between the two distributions.
+That made it challenging to distinguish the distributions.

-Now let's explore an alternative scenario where the two distributions are further apart.
+Now let's study outcomes when the distributions are further apart.

-Specifically, we set $f \sim \text{Beta}(2, 5)$ and $g \sim \text{Beta}(5, 2)$.
+Let's set $f \sim \text{Beta}(2, 5)$ and $g \sim \text{Beta}(5, 2)$.

-Try to compare the learning dynamics in this scenario with the previous one using the simulation code we developed earlier.
+Please use the Python code you have written to study outcomes.
 ```

 ```{solution-start} lr_ex5
@@ -1269,10 +1285,12 @@ plt.show()
 fig_g, axes_g = plot_learning_results(results_g, π_0_scenarios, 'g', 0.0)
 plt.show()
 ```
+Evidently, because the two distributions are further apart, it is easier to distinguish them.
+
+So learning occurs more quickly.

-In this case, it is easier to realize one's belief is incorrect; the belief adjusts more quickly.

-Observe that consumption shares also adjust more quickly.
+So do consumption shares.

 ```{solution-end}
 ```
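One way to make "further apart" concrete is the Kullback-Leibler divergence between the two Beta specifications, which ties back to the {eq}`eq:kl_likelihood_link` discussion referenced earlier in the diff. A small sketch, not part of the commit, assuming scipy is available; it uses the standard closed form for the KL divergence between Beta distributions.

```python
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL( Beta(a1, b1) || Beta(a2, b2) ) in closed form."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

# Close pair from lr_ex4 versus far-apart pair from lr_ex5
print(kl_beta(1.5, 1, 1, 1.5))  # f ~ Beta(1.5, 1), g ~ Beta(1, 1.5): small divergence
print(kl_beta(2, 5, 5, 2))      # f ~ Beta(2, 5),  g ~ Beta(5, 2): much larger divergence
```

A larger divergence means the likelihood ratio process separates the models faster on average, which is consistent with the quicker learning and quicker consumption-share adjustment described in the solution above.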
