
Commit f4cb3f7

committed
update exercises
1 parent a8335e6 commit f4cb3f7

File tree

1 file changed: +349 −1 lines changed


lectures/likelihood_ratio_process_2.md

Lines changed: 349 additions & 1 deletion
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-  jupytext_version: 1.17.1
+  jupytext_version: 1.17.2
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -928,3 +928,351 @@ $$

```{solution-end}
```

```{exercise}
:label: lr_ex4

In this exercise, we will implement the Blume-Easley model with learning agents.

Consider the two models

$$
f(s^t) = f(s_1) f(s_2) \cdots f(s_t)
$$

and

$$
g(s^t) = g(s_1) g(s_2) \cdots g(s_t)
$$

and the associated likelihood ratio process

$$
L(s^t) = \frac{f(s^t)}{g(s^t)}
$$

Let $\pi_0 \in (0,1)$ be a prior probability and

$$
\pi_t = \frac{ \pi_0 L(s^t)}{ \pi_0 L(s^t) + (1-\pi_0) }
$$

Now consider the mixture model

$$
m(s^t) = \pi(s^t) f(s^t) + (1- \pi(s^t)) g(s^t)
$$ (eq:be_mix_model)

Next, consider the environment of our Blume-Easley lecture.

We'll endow each type of consumer with model {eq}`eq:be_mix_model`.

* The two agents share the same $f$ and $g$, but
* they have different initial priors, say $\pi_0^1$ and $\pi_0^2$.

Thus, consumer $i$'s probability model is

$$
m^i(s^t) = \pi^i(s^t) f(s^t) + (1- \pi^i(s^t)) g(s^t)
$$ (eq:be_mix_model_i)

The idea is to hand probability models {eq}`eq:be_mix_model_i` for $i=1,2$ to the social planner of the Blume-Easley lecture, deduce the allocation $c^i(s^t), i = 1,2$, and watch what happens when

* nature's model is $f$
* nature's model is $g$

Both consumers will eventually learn the "truth", but one of them will learn faster.

Questions:

1. How do their consumption shares evolve?
2. Which agent learns faster when nature follows $f$? When nature follows $g$?
3. How does the difference in initial priors $\pi_0^1$ and $\pi_0^2$ affect the convergence speed?

In this exercise, set $f \sim \text{Beta}(1.5, 1)$ and $g \sim \text{Beta}(1, 1.5)$.
```

```{solution-start} lr_ex4
:class: dropdown
```

Here is one solution.

First, let's set up the model with learning agents:

```{code-cell} ipython3
def bayesian_update(π_0, L_t):
    """
    Bayesian update of belief probability given the likelihood ratio.
    """
    return (π_0 * L_t) / (π_0 * L_t + (1 - π_0))

def mixture_density_belief(s_seq, f_func, g_func, π_seq):
    """
    Compute the mixture density beliefs m^i(s^t) for agent i.
    """
    f_vals = f_func(s_seq)
    g_vals = g_func(s_seq)
    return π_seq * f_vals + (1 - π_seq) * g_vals
```
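
As a quick sanity check (an illustrative cell, not part of the original exercise; it assumes the beta density `p` and `jit` defined earlier in the lecture are available), we can verify that updating with the cumulative likelihood ratio agrees with updating sequentially, one observation at a time:

```{code-cell} ipython3
# Illustrative check: a batch update via the cumulative likelihood ratio
# should match sequential Bayesian updating observation by observation.
# `p` and `jit` are assumed to be defined earlier in the lecture.
f_check = jit(lambda x: p(x, 1.5, 1))
g_check = jit(lambda x: p(x, 1, 1.5))

s_check = np.random.beta(1.5, 1, 10)
π_0_check = 0.3

# Batch: accumulate the likelihood ratio, then update once
L_cumul = 1.0
for s in s_check:
    L_cumul *= f_check(s) / g_check(s)
π_batch = bayesian_update(π_0_check, L_cumul)

# Sequential: update the belief after each observation
π_rec = π_0_check
for s in s_check:
    l_s = f_check(s) / g_check(s)
    π_rec = (π_rec * l_s) / (π_rec * l_s + (1 - π_rec))

print(π_batch, π_rec)   # the two should agree up to floating point error
```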

Now let's implement the learning Blume-Easley simulation:

```{code-cell} ipython3
def simulate_learning_blume_easley(sequences, f_belief, g_belief,
                                   π_0_1, π_0_2, λ=0.5):
    """
    Simulate the Blume-Easley model with learning agents.
    """
    N, T = sequences.shape

    # Initialize arrays to store results
    π_1_seq = np.empty((N, T))
    π_2_seq = np.empty((N, T))
    c1_share = np.empty((N, T))
    l_agents_seq = np.empty((N, T))

    π_1_seq[:, 0] = π_0_1
    π_2_seq[:, 0] = π_0_2

    # Before any data arrives the two agents' models coincide, so the
    # likelihood ratio between them is 1 and agent 1's consumption share
    # equals its Pareto weight λ
    l_agents_seq[:, 0] = 1.0
    c1_share[:, 0] = λ

    for n in range(N):
        # Initialize cumulative likelihood ratio for beliefs
        L_cumul = 1.0

        # Initialize likelihood ratio between agent densities
        l_agents_cumul = 1.0

        for t in range(1, T):
            s_t = sequences[n, t]

            # Compute likelihood ratio for this observation
            l_t = f_belief(s_t) / g_belief(s_t)

            # Update cumulative likelihood ratio
            L_cumul *= l_t

            # Bayesian update of beliefs
            π_1_t = bayesian_update(π_0_1, L_cumul)
            π_2_t = bayesian_update(π_0_2, L_cumul)

            # Store beliefs
            π_1_seq[n, t] = π_1_t
            π_2_seq[n, t] = π_2_t

            # Compute mixture densities for each agent
            m1_t = π_1_t * f_belief(s_t) + (1 - π_1_t) * g_belief(s_t)
            m2_t = π_2_t * f_belief(s_t) + (1 - π_2_t) * g_belief(s_t)

            # Update cumulative likelihood ratio between agents
            l_agents_cumul *= (m1_t / m2_t)
            l_agents_seq[n, t] = l_agents_cumul

            # c_t^1(s^t) = λ * l_t(s^t) / (1 - λ + λ * l_t(s^t))
            # where l_t(s^t) is the cumulative likelihood ratio between agents
            c1_share[n, t] = λ * l_agents_cumul / (1 - λ + λ * l_agents_cumul)

    return {
        'π_1': π_1_seq,
        'π_2': π_2_seq,
        'c1_share': c1_share,
        'l_agents': l_agents_seq
    }
```
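
The last line of the inner loop applies the planner's allocation rule from the Blume-Easley lecture, restated here for convenience, with the cumulative likelihood ratio between the two agents' mixture models, $l_t(s^t) = m^1(s^t)/m^2(s^t)$, in place of the ratio of their fixed beliefs:

$$
c^1_t(s^t) = \frac{\lambda \, l_t(s^t)}{1 - \lambda + \lambda \, l_t(s^t)}
$$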

Let's run simulations for different scenarios.

We use $\lambda = 0.5$, $T=40$, and $N=1000$.

```{code-cell} ipython3
λ = 0.5
T = 40
N = 1000

F_a, F_b = 1.5, 1
G_a, G_b = 1, 1.5

f = jit(lambda x: p(x, F_a, F_b))
g = jit(lambda x: p(x, G_a, G_b))
```

We give the two agents different initial priors $\pi_0^i \in (0, 1)$: the first two scenarios reverse the agents' priors, and the third widens the gap between them.

```{code-cell} ipython3
# Different initial priors
π_0_scenarios = [
    (0.3, 0.7),
    (0.7, 0.3),
    (0.1, 0.9),
]
```

Now we can run the simulations for each scenario:

```{code-cell} ipython3
# Nature follows f
s_seq_f = np.random.beta(F_a, F_b, (N, T))

# Nature follows g
s_seq_g = np.random.beta(G_a, G_b, (N, T))

results_f = {}
results_g = {}

for i, (π_0_1, π_0_2) in enumerate(π_0_scenarios):
    # When nature follows f
    results_f[i] = simulate_learning_blume_easley(
        s_seq_f, f, g, π_0_1, π_0_2, λ)
    # When nature follows g
    results_g[i] = simulate_learning_blume_easley(
        s_seq_g, f, g, π_0_1, π_0_2, λ)
```

Now let's visualize the results.

```{code-cell} ipython3
def plot_learning_results(results, π_0_scenarios, nature_type, truth_value):
    """
    Plot beliefs and consumption shares for learning agents.
    """

    fig, axes = plt.subplots(3, 2, figsize=(10, 15))

    scenario_labels = [
        rf'$\pi_0^1 = {π_0_1}, \pi_0^2 = {π_0_2}$'
        for π_0_1, π_0_2 in π_0_scenarios
    ]

    for row, (scenario_idx, scenario_label) in enumerate(
            zip(range(3), scenario_labels)):

        res = results[scenario_idx]

        # Plot beliefs
        ax = axes[row, 0]
        π_1_med = np.median(res['π_1'], axis=0)
        π_2_med = np.median(res['π_2'], axis=0)
        ax.plot(π_1_med, 'C0', label=r'$\pi_t^1$ (agent 1)', linewidth=2)
        ax.plot(π_2_med, 'C1', label=r'$\pi_t^2$ (agent 2)', linewidth=2)
        ax.axhline(y=truth_value, color='gray', linestyle='--',
                   alpha=0.5, label=f'truth ({nature_type})')
        ax.set_title(f'beliefs when nature = {nature_type}\n{scenario_label}')
        ax.set_ylabel('belief probability')
        ax.set_ylim([-0.05, 1.05])
        ax.legend()

        # Plot consumption shares
        ax = axes[row, 1]
        c1_med = np.median(res['c1_share'], axis=0)
        ax.plot(c1_med, 'g-', linewidth=2, label='agent 1 consumption share')
        ax.axhline(y=0.5, color='gray', linestyle='--',
                   alpha=0.5, label='equal split')
        ax.set_title(f'consumption when nature = {nature_type}')
        ax.set_ylabel('agent 1 share')
        ax.set_ylim([0, 1])
        ax.legend()

        # Add x-labels
        for col in range(2):
            axes[row, col].set_xlabel('time')

    plt.tight_layout()
    return fig, axes
```

Now use the function to plot the results when nature follows $f$:

```{code-cell} ipython3
fig_f, axes_f = plot_learning_results(
    results_f, π_0_scenarios, 'f', 1.0)
plt.show()
```

We can see that the agent with the more "accurate" prior ends up with the higher consumption share.

Moreover, the further apart the two initial priors are, the longer it takes for the consumption shares to converge.

The time the "less accurate" agent needs to learn the truth costs it consumption along the way.
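
As a rough illustration of these convergence speeds (the threshold of $0.9$ below is an arbitrary choice, not part of the original exercise), we can record the first period at which each agent's median belief crosses $0.9$ when nature follows $f$:

```{code-cell} ipython3
# Illustrative check: first period at which each agent's median belief
# exceeds 0.9 (an arbitrary threshold); None means the threshold is not
# reached within the simulated horizon.
for i, (π_0_1, π_0_2) in enumerate(π_0_scenarios):
    res = results_f[i]
    π_1_med = np.median(res['π_1'], axis=0)
    π_2_med = np.median(res['π_2'], axis=0)
    t_1 = int(np.argmax(π_1_med > 0.9)) if np.any(π_1_med > 0.9) else None
    t_2 = int(np.argmax(π_2_med > 0.9)) if np.any(π_2_med > 0.9) else None
    print(f"π_0 = ({π_0_1}, {π_0_2}): agent 1 crosses 0.9 at t = {t_1}, "
          f"agent 2 at t = {t_2}")
```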

Now plot the results when nature follows $g$:

```{code-cell} ipython3
fig_g, axes_g = plot_learning_results(results_g, π_0_scenarios, 'g', 0.0)
plt.show()
```

We observe a similar but mirror-image pattern.

```{solution-end}
```

```{exercise}
:label: lr_ex5

In the previous exercise, we deliberately set the two beta distributions relatively close to each other, so that they are hard to distinguish.

Now let's explore a scenario in which the two distributions are further apart.

Specifically, set $f \sim \text{Beta}(2, 5)$ and $g \sim \text{Beta}(5, 2)$.

Compare the learning dynamics in this scenario with those in the previous exercise, using the simulation code we developed earlier.
```

```{solution-start} lr_ex5
:class: dropdown
```

Here is one solution.

```{code-cell} ipython3
λ = 0.5
T = 40
N = 1000

F_a, F_b = 2, 5
G_a, G_b = 5, 2

f = jit(lambda x: p(x, F_a, F_b))
g = jit(lambda x: p(x, G_a, G_b))

π_0_scenarios = [
    (0.3, 0.7),
    (0.7, 0.3),
    (0.1, 0.9),
]

s_seq_f = np.random.beta(F_a, F_b, (N, T))
s_seq_g = np.random.beta(G_a, G_b, (N, T))

results_f = {}
results_g = {}

for i, (π_0_1, π_0_2) in enumerate(π_0_scenarios):
    # When nature follows f
    results_f[i] = simulate_learning_blume_easley(
        s_seq_f, f, g, π_0_1, π_0_2, λ)
    # When nature follows g
    results_g[i] = simulate_learning_blume_easley(
        s_seq_g, f, g, π_0_1, π_0_2, λ)
```

Now let's visualize the results.

```{code-cell} ipython3
fig_f, axes_f = plot_learning_results(results_f, π_0_scenarios, 'f', 1.0)
plt.show()
```

```{code-cell} ipython3
fig_g, axes_g = plot_learning_results(results_g, π_0_scenarios, 'g', 0.0)
plt.show()
```

Because the two distributions are further apart, it is easier for an agent to discover that its belief is incorrect, so beliefs adjust more quickly.

Observe that consumption shares also adjust more quickly.
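
Repeating the rough timing check from the previous exercise (again with the arbitrary $0.9$ threshold) lets us compare crossing times across the two specifications:

```{code-cell} ipython3
# Same illustrative check as before, now with the wider gap between f and g
for i, (π_0_1, π_0_2) in enumerate(π_0_scenarios):
    res = results_f[i]
    π_1_med = np.median(res['π_1'], axis=0)
    π_2_med = np.median(res['π_2'], axis=0)
    t_1 = int(np.argmax(π_1_med > 0.9)) if np.any(π_1_med > 0.9) else None
    t_2 = int(np.argmax(π_2_med > 0.9)) if np.any(π_2_med > 0.9) else None
    print(f"π_0 = ({π_0_1}, {π_0_2}): agent 1 crosses 0.9 at t = {t_1}, "
          f"agent 2 at t = {t_2}")
```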

```{solution-end}
```
