lectures/likelihood_ratio_process_2.md
+57 −39 (57 additions, 39 deletions)
@@ -52,7 +52,7 @@ We'll study two alternative arrangements:
The fundamental theorems of welfare economics will apply and assure us that these two arrangements end up producing exactly the same allocation of consumption goods to individuals **provided** that the social planner assigns an appropriate set of **Pareto weights**.

```{note}
- You can learn about how the two welfare theorems are applied in modern macroeconomic models in {doc}`this lecture on a planning problem <cass_koopmans_1>` and {doc}`this lecture on a related competitive equilibrium <cass_koopmans_2>`.
+ You can learn about how the two welfare theorems are applied in modern macroeconomic models in {doc}`this lecture on a planning problem <cass_koopmans_1>` and {doc}`this lecture on a related competitive equilibrium <cass_koopmans_2>`. {doc}`This quantecon lecture <ge_arrow>` presents a recursive formulation of complete markets models with homogeneous beliefs.
```
@@ -830,13 +830,22 @@ This ties in nicely with {eq}`eq:kl_likelihood_link`.
## Related Lectures

- Likelihood processes play an important role in Bayesian learning, as described in {doc}`likelihood_bayes`
- and as applied in {doc}`odu`.
+ Complete markets models with homogeneous beliefs, a kind often used in macroeconomics and finance, are studied in this quantecon lecture {doc}`ge_arrow`.
+
+ {cite}`blume2018case` discuss a paternalistic case against complete markets. Their analysis assumes that a social planner should disregard individuals' preferences in the sense that it should disregard the subjective belief components of those preferences.
+
+ Likelihood processes play an important role in Bayesian learning, as described in {doc}`likelihood_bayes` and as applied in {doc}`odu`.

Likelihood ratio processes appear again in {doc}`advanced:additive_functionals`.

Each of our two agents deploys its own version of the mixture model

$$
m(s^t) = \pi_t f(s^t) + (1- \pi_t) g(s^t)
$$ (eq:be_mix_model)

- Now consider them in the environment in our Blume-Easley lecture.

We'll endow each type of consumer with model {eq}`eq:be_mix_model`.

* The two agents share the same $f$ and $g$, but
@@ -977,29 +987,35 @@ $$
$$
m^i(s^t) = \pi^i_t f(s^t) + (1- \pi^i_t) g(s^t)
$$ (eq:prob_model)

- The idea is to hand probability models {eq}`eq:prob_model` for $i=1,2$ to the social planner in the Blume-Easley lecture, deduce allocation $c^i(s^t), i = 1,2$, and watch what happens when
+ We now hand probability models {eq}`eq:prob_model` for $i=1,2$ to the social planner.
+ We want to deduce the allocation $c^i(s^t), i = 1,2$, and watch what happens when

* nature's model is $f$
* nature's model is $g$

- Both consumers will eventually learn the "truth", but one of them will learn faster.
+ We expect that consumers will eventually learn the "truth", but that one of them will learn faster.
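
To make "learn" concrete, here is a sketch of the recursion behind this claim, in the notation above: under mixture model {eq}`eq:prob_model`, Bayes' law updates agent $i$'s weight on $f$ after an i.i.d. draw $s_{t+1}$ via

$$
\pi^i_{t+1} = \frac{\pi^i_t f(s_{t+1})}{\pi^i_t f(s_{t+1}) + (1 - \pi^i_t) g(s_{t+1})},
$$

so $\pi^i_t$ tends to $1$ when nature draws from $f$ and to $0$ when nature draws from $g$, at a speed governed by the initial prior and the realized likelihood ratio process.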

+ To explore things, please set $f \sim \text{Beta}(1.5, 1)$ and $g \sim \text{Beta}(1, 1.5)$.

+ Please write Python code that answers the following questions.

+ * How do consumption shares evolve?
+ * Which agent learns faster when nature follows $f$?
+ * Which agent learns faster when nature follows $g$?
+ * How does a difference in initial priors $\pi_0^1$ and $\pi_0^2$ affect the convergence speed?

- Questions:
- 1. How do their consumption shares evolve?
- 2. Which agent learns faster when nature follows $f$? When nature follows $g$?
- 3. How does the difference in initial priors $\pi_0^1$ and $\pi_0^2$ affect the convergence speed?

- In the exercise below, set $f \sim \text{Beta}(1.5, 1)$ and $g \sim \text{Beta}(1, 1.5)$.

```

```{solution-start} lr_ex4
:class: dropdown
```

- Here is one solution.
- First, let's set up the model with learning agents:
+ First, let's write helper functions that compute model components including each agent's subjective belief function.
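
For the exercise above, here is a minimal sketch of what such helper functions might look like. It assumes log utility, equal discounting, a constant aggregate endowment normalized to one, and a Pareto weight `lam` on agent 1, so that the planner gives agent 1 the share $\lambda m^1(s^t) / \bigl(\lambda m^1(s^t) + (1-\lambda) m^2(s^t)\bigr)$; the function names (`update_belief`, `simulate_shares`) are illustrative rather than the lecture's own code.

```python
import numpy as np
from scipy.stats import beta

# A sketch of helper functions for the exercise.  The Beta parameters follow
# the exercise statement; the consumption-share formula below assumes log
# utility, equal discounting, a constant aggregate endowment of one, and a
# Pareto weight `lam` on agent 1 (an assumption about the planner's problem).

f_pdf = beta(1.5, 1).pdf   # density of model f
g_pdf = beta(1, 1.5).pdf   # density of model g


def update_belief(pi, s):
    """One-step Bayes update of the probability pi that an agent puts on f."""
    pf, pg = pi * f_pdf(s), (1 - pi) * g_pdf(s)
    return pf / (pf + pg)


def simulate_shares(pi0_1, pi0_2, T=200, truth="f", lam=0.5, seed=0):
    """Simulate beliefs and agent 1's consumption share along one sample path."""
    rng = np.random.default_rng(seed)
    dist = beta(1.5, 1) if truth == "f" else beta(1, 1.5)
    draws = dist.rvs(size=T, random_state=rng)

    pi1, pi2 = pi0_1, pi0_2
    logm1 = logm2 = 0.0            # log of the subjective densities m^i(s^t)
    shares, beliefs = [], []
    for s in draws:
        # accumulate each agent's subjective (predictive) density of the history
        logm1 += np.log(pi1 * f_pdf(s) + (1 - pi1) * g_pdf(s))
        logm2 += np.log(pi2 * f_pdf(s) + (1 - pi2) * g_pdf(s))
        pi1, pi2 = update_belief(pi1, s), update_belief(pi2, s)
        # planner's allocation: agent 1's share is lam*m1 / (lam*m1 + (1-lam)*m2)
        ratio = np.exp(logm2 - logm1)
        shares.append(lam / (lam + (1 - lam) * ratio))
        beliefs.append((pi1, pi2))
    return np.array(shares), np.array(beliefs)
```

Running `simulate_shares` with `truth="f"` versus `truth="g"` and with different initial priors, then plotting the returned arrays, is one way to address the four questions in the exercise.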