| 1 | +import matplotlib.pyplot as plt |
1 | 2 | import streamlit as st |
2 | 3 |
3 | | - |
4 | | -def main() -> None: |
5 | | - """Print some documentation.""" |
6 | | - st.markdown("""## Default Strategy""") |
7 | | - st.markdown(r""" |
8 | | - $ |
9 | | - \textbf{motivation}(distance) = |
10 | | - \begin{cases} |
11 | | - 0 & \text{if\;} distance \geq \text{width}, \\ |
12 | | - e \cdot \text{height}\cdot\exp\left(\frac{1}{\left(\frac{distance}{\text{width}}\right)^2 - 1}\right) & \text{otherwise}. |
13 | | - \end{cases} |
14 | | - $ |
15 | | - |
16 | | - --- |
17 | | - --- |
18 | | - """) |
19 | | - st.markdown(r""" |
20 | | - ## EVC |
21 | | - $\textbf{motivation} = E\cdot V\cdot C,$ where |
22 | | - - $E$: expectancy |
23 | | - - $V$: value |
24 | | - - $C$: competition |
25 | | - |
26 | | - --- |
27 | | - """) |
28 | | - |
29 | | - st.markdown( |
30 | | - r""" |
31 | | - ### 1. Expectancy
32 | | - A bell-shaped function with maximum height at distance = 0 that decays to 0 beyond width.
33 | | -
34 | | - - **Local Maximum:** |
35 | | - Near the door ($distance < width$), $E$ is comparatively large.
36 | | - - **Decay to Zero:** |
37 | | - Once beyond a certain "influence zone" ($distance > width$), the expectancy drops to 0. |
38 | | - |
39 | | - --- |
40 | | - |
41 | | - $ |
42 | | - \textbf{expectancy}(distance) = |
43 | | - \begin{cases} |
44 | | - 0 & \text{if\;} distance \geq \text{width}, \\ |
45 | | - e \cdot \text{height}\cdot\exp\left(\frac{1}{\left(\frac{distance}{\text{width}}\right)^2 - 1}\right) & \text{otherwise}. |
46 | | - \end{cases}
47 | | - $ |
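Read on its own, the piecewise definition above can be sketched as a plain function. The names `width` and `height` follow the parameter table further down; this is an illustration, not the app's actual implementation:

```python
import math

def expectancy(distance: float, width: float, height: float) -> float:
    """Bump function: maximal at distance = 0, identically 0 for distance >= width."""
    if distance >= width:
        return 0.0
    ratio_sq = (distance / width) ** 2  # lies in [0, 1) inside the influence zone
    return math.e * height * math.exp(1.0 / (ratio_sq - 1.0))
```

At `distance = 0` the inner exponent is `-1`, so the leading factor `e` cancels and the function peaks exactly at `height`.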
48 | | - |
49 | | - **Note:** this is the same function as in the Default Strategy.
50 | | - |
51 | | - --- |
52 | | - ### 2. Competition |
53 | | -
54 | | - 1. Early Competition: |
55 | | - At the start (up to $N_0$ departures), everyone competes for a reward or advantage. This phase can mimic a scenario where there is some strong external incentive for the first few people to escape. |
56 | | -
57 | | - 2. Gradual Decline: |
58 | | - Between $N_0$ and $\text{percent} \cdot N_{max}$ departures, the reward, or the "reason to compete", diminishes as more agents leave.
59 | | -
60 | | - 3. No Competition: |
61 | | - Once a critical number (or fraction) of people have left, $C$ goes to 0. This suggests that no meaningful benefit remains for being among the next ones out. |
62 | | -
63 | | - Current implementation: |
64 | | - - $N$ is the number of agents who have already left the room. |
65 | | - - $C$ starts at $c_0$ and remains constant as long as $N<N_0$. |
66 | | - After $N_0$ agents have left, the competition drops linearly until it reaches 0 at $N = \text{percent} \cdot N_{max}$.
67 | | -
68 | | - --- |
69 | | - |
70 | | - $$ |
71 | | - \textbf{competition} = |
72 | | -
73 | | - \begin{cases} |
74 | | - c_0 & \text{if } N \leq N_0 \\ |
75 | | - c_0 - \left(\frac{c_0}{\text{percent} \cdot N_{\text{max}} - N_0}\right) \cdot (N - N_0) & \text{if } N_0 < N < \text{percent} \cdot N_{\text{max}} \\ |
76 | | - 0 & \text{if } N \geq \text{percent} \cdot N_{\text{max}}, |
77 | | - \end{cases} |
78 | | - $$ |
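A minimal sketch of this piecewise-linear schedule. Argument names mirror the symbols $c_0$, $N_0$, percent, and $N_{max}$ from the equation; illustrative only:

```python
def competition(n: int, c0: float, n0: int, percent: float, n_max: int) -> float:
    """Constant c0 up to n0 departures, linear decay, then 0 past percent * n_max."""
    cutoff = percent * n_max
    if n <= n0:
        return c0
    if n >= cutoff:
        return 0.0
    return c0 - (c0 / (cutoff - n0)) * (n - n0)
```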
79 | | - |
80 | | - --- |
81 | | - ### 3. Value function
82 | | -
83 | | - - Agents have a parameter $V_i$ that represents their intrinsic "care level" (low or high). |
84 | | - - Assign low or high values based on distance to exit, or with some probability distribution. |
85 | | - |
86 | | - $\textbf{value} = random\_number \in [v_{\min}, v_{\max}].$ |
87 | | - |
88 | | - We propose a method for assigning values to agents based on their spatial proximity to a designated exit. |
89 | | - In this approach, the likelihood that an agent receives a high value decays exponentially with distance from the exit. |
90 | | -
91 | | - ###### Distance Decay Parameter |
92 | | -
93 | | - A key parameter in this model is the **distance decay**, which regulates the rate at which the probability of being high value decreases with distance. The parameter is defined as: |
94 | | -
95 | | - $$ |
96 | | - \text{distance\_decay} = -\frac{\text{width}}{\ln(0.01)} |
97 | | - $$ |
98 | | - This formulation guarantees that the probability of an agent being assigned a high value is approximately 0.01 when the agent is at a distance equal to the defined `width` from the exit. |
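The 0.01 guarantee follows directly from substituting distance = width into the decay function; a quick numeric check (the 10 m width is an arbitrary example value):

```python
import math

width = 10.0  # any positive width works; 10 m is an arbitrary example
distance_decay = -width / math.log(0.01)

# Substituting distance = width recovers exp(ln 0.01) = 0.01.
p_at_width = math.exp(-width / distance_decay)
```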
99 | | -
100 | | - ###### Probability Calculation |
101 | | - ###### Distance Measurement |
102 | | - For an agent at position \(\mathbf{p} = (x, y)\) and an exit at \(\mathbf{p}_{\text{exit}} = (x_{\text{exit}}, y_{\text{exit}})\), the Euclidean distance is computed as: |
103 | | -
104 | | - $$ |
105 | | - d(\mathbf{p}, \mathbf{p}_{\text{exit}}) = \sqrt{(x - x_{\text{exit}})^2 + (y - y_{\text{exit}})^2} |
106 | | - $$ |
107 | | - ###### Exponential Decay Function |
108 | | - |
109 | | - The probability \(P(\mathbf{p})\) that an agent at \(\mathbf{p}\) is assigned a high value is given by: |
110 | | -
111 | | - $$ |
112 | | - P(\mathbf{p}) = \exp\left(-\frac{d(\mathbf{p}, \mathbf{p}_{\text{exit}})}{\text{distance\_decay}}\right) |
113 | | - $$ |
114 | | -
115 | | - This exponential decay ensures that agents closer to the exit have a higher probability of being designated as high value. |
116 | | -
117 | | - ###### Seed Management for Reproducibility |
118 | | -
119 | | - To maintain reproducibility in the randomness inherent in the assignment process, a **SeedManager** is used. Each random operation employs a derived seed, calculated as: |
120 | | -
121 | | - $$ |
122 | | - \text{derived\_seed} = \text{base\_seed} \times 1000 + \text{operation\_id} |
123 | | - $$ |
124 | | -
125 | | - This ensures that every operation, including the generation of random values for agents, is deterministic when the same base seed is used. |
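The derived-seed rule can be exercised with Python's stdlib `random`. This is a sketch of the idea only; the actual SeedManager API is not shown here:

```python
import random

def derived_seed(base_seed: int, operation_id: int) -> int:
    # derived_seed = base_seed * 1000 + operation_id, as in the formula above.
    return base_seed * 1000 + operation_id

def seeded_uniform(base_seed: int, operation_id: int, lo: float, hi: float) -> float:
    # Same (base_seed, operation_id) pair -> same draw, hence reproducible runs.
    return random.Random(derived_seed(base_seed, operation_id)).uniform(lo, hi)
```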
126 | | -
127 | | - ##### Value Assignment Process |
128 | | -
129 | | - The final assignment of values to agents proceeds through the following steps: |
130 | | -
131 | | - 1. **Probability Computation:** |
132 | | - For each agent, the high value probability is calculated using the agent's distance from the exit. |
133 | | -
134 | | - 2. **Random Perturbation:** |
135 | | - To prevent strictly deterministic outcomes (especially when agents have similar distances), a small random factor is introduced: |
136 | | - |
137 | | - $$ |
138 | | - P' = P(\mathbf{p}) \times \left(1 + U[0, 0.2]\right) |
139 | | - $$ |
140 | | - |
141 | | - where \(U[0, 0.2]\) represents a uniformly distributed random variable between 0 and 0.2. |
142 | | -
143 | | - 3. **Sorting and Selection:** |
144 | | - Agents are sorted in descending order based on the perturbed probability \(P'\). The top \(N\) agents—where \(N\) is the predefined number of high value agents—are selected. |
145 | | -
146 | | - 4. **Final Value Generation:** |
147 | | - - **High Value Agents:** |
148 | | - Each agent in the selected set receives a value in the range |
149 | | - $$ [v_{\min}^{\text{high}}, v_{\max}^{\text{high}}]$$. |
150 | | - - **Low Value Agents:** |
151 | | - All remaining agents are assigned a value in the range $$[v_{\min}^{\text{low}}, v_{\max}^{\text{low}}]$$. |
152 | | - |
153 | | - Each value is generated using a random number generator seeded with the agent's derived seed. |
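The four steps can be condensed into a single stdlib sketch. For brevity it uses one shared RNG instead of per-agent derived seeds; the decay probability, the multiplicative $U[0, 0.2]$ perturbation, the top-$N$ selection, and the two value ranges follow the description above:

```python
import math
import random

def assign_values(
    distances: list[float],
    width: float,
    n_high: int,
    high_range: tuple[float, float],
    low_range: tuple[float, float],
    base_seed: int = 42,
) -> list[float]:
    """Steps 1-4: decay probability, perturb, select top-N, draw final values."""
    distance_decay = -width / math.log(0.01)
    rng = random.Random(base_seed)
    # Steps 1-2: exponential decay with a multiplicative U[0, 0.2] perturbation.
    perturbed = [
        (math.exp(-d / distance_decay) * (1.0 + rng.uniform(0.0, 0.2)), i)
        for i, d in enumerate(distances)
    ]
    # Step 3: the n_high agents with the largest perturbed probability are high value.
    high_ids = {i for _, i in sorted(perturbed, reverse=True)[:n_high]}
    # Step 4: draw each final value from the matching range.
    return [
        rng.uniform(*high_range) if i in high_ids else rng.uniform(*low_range)
        for i in range(len(distances))
    ]
```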
154 | | - |
155 | | - --- |
156 | | - ## Parameters |
157 | | -
158 | | - | Parameter | Meaning| Function| |
159 | | - |--------------|:-----:|:-----:| |
160 | | - |$N_0$ | Number of agents at which the decay of the function starts.| Competition| |
161 | | - |$N_{\max}$ | Initial number of agents in the simulation|Competition| |
162 | | - |$c_0$ | Maximal competition|Competition| |
163 | | - |$p$ | Fraction $\in [0, 1]$ of $N_{\max}$ at which competition reaches 0.|Competition|
164 | | - | |
165 | | - |$v_{\min}$| Minimum value | Value|
166 | | - |$v_{\max}$| Maximum value | Value| |
167 | | - | |
168 | | - |width| Range of influence | Expectancy| |
169 | | - |height| Amplitude of influence | Expectancy| |
170 | | - |
171 | | - |
172 | | - ## Update agents |
173 | | - For an agent $i$ we calculate $m_i$ by one of the methods above and update its parameters as follows: |
174 | | - $$ |
175 | | - m_i = V_i \cdot E_i \cdot C_i \in [0, 1] |
176 | | - $$ |
177 | | - |
178 | | - Then |
179 | | - $$ |
180 | | - \tilde v_i^0 = v_i^0\cdot V_i |
181 | | - $$ |
182 | | - This one-time scaling ensures that agents who "care more" start with a higher desired speed. |
183 | | - |
184 | | - and |
185 | | - $$ |
186 | | - \tilde T_i = \frac{T_i}{\Big(1+m_i\Big)}, |
187 | | - $$ |
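The update for a single agent, combining the three formulas above, can be sketched as a pure function (the real code updates the simulation model in place):

```python
def update_agent(v0: float, T: float, E: float, V: float, C: float) -> tuple[float, float]:
    """Return (scaled desired speed, scaled time gap) for one agent."""
    m = E * V * C               # motivation m_i = V_i * E_i * C_i, in [0, 1]
    v0_scaled = v0 * V          # one-time scaling: agents who care more start faster
    T_scaled = T / (1.0 + m)    # motivated agents accept a smaller time gap
    return v0_scaled, T_scaled
```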
188 | | - """, |
189 | | - unsafe_allow_html=True, |
| 4 | +from .motivation_mapping import ( |
| 5 | + AnchorValues, |
| 6 | + evaluate_gompertz, |
| 7 | + estimate_gompertz_from_anchors, |
| 8 | +) |
| 9 | + |
| 10 | + |
| 11 | +def _gompertz_example_plot() -> None: |
| 12 | + """Render a Gompertz example with anchor points.""" |
| 13 | + anchors = AnchorValues(low=0.5, normal=1.2, high=3.6) |
| 14 | + params = estimate_gompertz_from_anchors(anchors) |
| 15 | + m_values = [0.1 + i * (3.0 - 0.1) / 200 for i in range(201)] |
| 16 | + y_values = [evaluate_gompertz(m, params) for m in m_values] |
| 17 | + |
| 18 | + fig, ax = plt.subplots(figsize=(7, 4), constrained_layout=True) |
| 19 | + ax.plot(m_values, y_values, lw=2, label="Gompertz fit") |
| 20 | + ax.scatter([0.1, 1.0, 3.0], [0.5, 1.2, 3.6], c="red", zorder=3, label="anchors") |
| 21 | + ax.set_xlabel("Motivation m") |
| 22 | + ax.set_ylabel("Desired speed (m/s)") |
| 23 | + ax.set_title("Gompertz Example: desired speed mapping") |
| 24 | + ax.grid(alpha=0.3) |
| 25 | + ax.legend() |
| 26 | + st.pyplot(fig) |
| 27 | + |
| 28 | + |
| 29 | +def _gompertz_fit_mode_comparison_plot() -> None: |
| 30 | + """Render a side-by-side comparison of fit modes.""" |
| 31 | + anchors = AnchorValues(low=0.5, normal=1.2, high=3.6) |
| 32 | + params_exact = estimate_gompertz_from_anchors(anchors) |
| 33 | +
| 34 | + # Import here to avoid circular app dependencies and keep docs self-contained.
| 35 | + from .motivation_mapping import estimate_gompertz_sigmoid_preferred
| 36 | + params_sig = estimate_gompertz_sigmoid_preferred(anchors, inflection_target=1.5)
| 39 | + |
| 40 | + m_values = [0.1 + i * (3.0 - 0.1) / 300 for i in range(301)] |
| 41 | + y_exact = [evaluate_gompertz(m, params_exact) for m in m_values] |
| 42 | + y_sig = [evaluate_gompertz(m, params_sig) for m in m_values] |
| 43 | + |
| 44 | + fig, axes = plt.subplots(1, 2, figsize=(12, 4), constrained_layout=True) |
| 45 | + |
| 46 | + axes[0].plot(m_values, y_exact, lw=2, color="#1f77b4", label="exact_anchors") |
| 47 | + axes[0].scatter([0.1, 1.0, 3.0], [0.5, 1.2, 3.6], c="red", zorder=3) |
| 48 | + axes[0].set_title("exact_anchors") |
| 49 | + axes[0].set_xlabel("Motivation m") |
| 50 | + axes[0].set_ylabel("Desired speed (m/s)") |
| 51 | + axes[0].grid(alpha=0.3) |
| 52 | + axes[0].legend() |
| 53 | + |
| 54 | + axes[1].plot( |
| 55 | + m_values, y_sig, lw=2, color="#ff7f0e", label="sigmoid_preferred" |
190 | 56 | ) |
| 57 | + axes[1].scatter([0.1, 1.0, 3.0], [0.5, 1.2, 3.6], c="red", zorder=3) |
| 58 | + axes[1].set_title("sigmoid_preferred") |
| 59 | + axes[1].set_xlabel("Motivation m") |
| 60 | + axes[1].set_ylabel("Desired speed (m/s)") |
| 61 | + axes[1].grid(alpha=0.3) |
| 62 | + axes[1].legend() |
191 | 63 |
192 | | - st.markdown( |
193 | | - r""" |
194 | | - ## Runtime Mapping (Gompertz) |
| 64 | + st.pyplot(fig) |
195 | 65 |
196 | | - During simulation, the operational model parameters are mapped from motivation |
197 | | - with Gompertz curves and clamped motivation: |
198 | 66 |
199 | | - $$ |
200 | | - m_i^{\mathrm{used}} = \mathrm{clip}\left(m_i,\; m_{\min},\; \frac{3.6}{v_0^{\mathrm{normal}}}\right) |
201 | | - $$ |
| 67 | +def _logistic_example_plot() -> None: |
| 68 | + """Render an illustrative logistic-family example (docs only).""" |
| 69 | + lower = 0.5 |
| 70 | + upper = 3.6 |
| 71 | + x0 = 1.5 |
| 72 | + k = 2.4 |
202 | 73 |
203 | | - Each mapped parameter uses low/normal/high anchors at |
204 | | - $m \in \{0.1,\;1,\;3\}$: |
| 74 | + from math import exp  # stdlib exp instead of a hard-coded Euler constant
| 75 | + m_values = [0.1 + i * (3.0 - 0.1) / 300 for i in range(301)]
| 76 | + y_values = [lower + (upper - lower) / (1.0 + exp(-k * (m - x0))) for m in m_values]
205 | 76 |
206 | | - - desired speed $\tilde v_0(m)$ |
207 | | - - time gap $\tilde T(m)$ |
208 | | - - buffer $\tilde b(m)$ |
209 | | - - strength neighbor repulsion $\tilde A(m)$ |
| 77 | + fig, ax = plt.subplots(figsize=(7, 4), constrained_layout=True) |
| 78 | + ax.plot(m_values, y_values, lw=2, color="#2ca02c", label="logistic example") |
| 79 | + ax.scatter([0.1, 1.0, 3.0], [0.5, 1.2, 3.6], c="red", zorder=3, label="reference anchors") |
| 80 | + ax.set_xlabel("Motivation m") |
| 81 | + ax.set_ylabel("Desired speed (m/s)") |
| 82 | + ax.set_title("Logistic family example (illustrative only)") |
| 83 | + ax.grid(alpha=0.3) |
| 84 | + ax.legend() |
| 85 | + st.pyplot(fig) |
210 | 86 |
211 | | - The interaction range is fixed: |
212 | 87 |
213 | | - $$ |
214 | | - \tilde D(m)=d_{\text{ped}} \quad \text{(constant)} |
215 | | - $$ |
| 88 | +def main() -> None: |
| 89 | + """Documentation tab for current EVC + Gompertz model.""" |
| 90 | + st.markdown("## Motivation Model (EVC + Gompertz)") |
| 91 | + st.markdown( |
| 92 | + "This app uses **EVC motivation** and **Gompertz-based parameter mapping** only. " |
| 93 | + "The old **Default Strategy** is intentionally removed from this documentation." |
| 94 | + ) |
| 95 | + st.markdown("### Core equations") |
| 96 | + st.latex(r"M_i = E_i \cdot V_i \cdot C_i") |
| 97 | + st.latex( |
| 98 | + r"m_i^{\mathrm{used}} = \mathrm{clip}\left(m_i,\; m_{\min},\; \frac{3.6}{v_0^{\mathrm{normal}}}\right)" |
| 99 | + ) |
| 100 | + st.latex(r"y(m) = a \cdot \exp\left(-b \cdot \exp(-c \cdot m)\right)") |
| 101 | + st.markdown( |
| 102 | + "Where `y(m)` is one mapped operational parameter and `(a,b,c)` can be " |
| 103 | + "fitted from anchors or entered manually in the app." |
| 104 | + ) |
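The clamp and the Gompertz form can be checked in isolation. The `(a, b, c)` values in the test below are arbitrary placeholders, not the fitted parameters:

```python
import math

def clip_motivation(m: float, m_min: float, v0_normal: float) -> float:
    # m_used = clip(m, m_min, 3.6 / v0_normal), matching the equation above.
    return max(m_min, min(m, 3.6 / v0_normal))

def gompertz(m: float, a: float, b: float, c: float) -> float:
    # y(m) = a * exp(-b * exp(-c * m)); a is the upper asymptote.
    return a * math.exp(-b * math.exp(-c * m))
```

For `b, c > 0` the curve rises monotonically toward `a`, which is why `a` plays the role of the high-motivation anchor.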
| 105 | + st.markdown( |
| 106 | + "- `fit_mode = exact_anchors`: exact match at low/normal/high.\n" |
| 107 | + "- `fit_mode = sigmoid_preferred`: approximate anchors, prefers an in-range inflection." |
| 108 | + ) |
| 109 | + |
| 110 | + st.markdown("### Requirements (Current)") |
| 111 | + st.markdown( |
216 | 112 | """ |
| 113 | +| Parameter | Low motivation | Normal motivation | High motivation | Rule | |
| 114 | +|---|---:|---:|---:|---| |
| 115 | +| Motivation (m) | 0.1 | 1.0 | 3.0 | Clamped by m in [m_min, 3.6 / v0_normal] | |
| 116 | +| Desired speed (v0_tilde) [m/s] | 0.5 | 1.2 | 3.6 | Gompertz | |
| 117 | +| Time gap (T_tilde) [s] | 2.0 | 1.0 | 0.01 | Gompertz | |
| 118 | +| Buffer (b_tilde) [m] | 1.0 | 0.1 | 0.0 | Gompertz | |
| 119 | +| Strength neighbor repulsion (A_tilde) | a_ped_min | a_ped | a_ped_max | Gompertz; anchors from config | |
| 120 | +| Range neighbor repulsion (D_tilde) | d_ped | d_ped | d_ped | Constant | |
| 121 | +""" |
| 122 | + ) |
| 123 | + |
| 124 | + st.markdown("### Gompertz plot") |
| 125 | + _gompertz_example_plot() |
| 126 | + st.markdown("### Fit mode comparison") |
| 127 | + st.markdown( |
| 128 | + "- `exact_anchors`: passes exactly through low/normal/high anchors.\n" |
| 129 | + "- `sigmoid_preferred`: allows small anchor mismatch to keep a stronger S-shape in-range." |
| 130 | + ) |
| 131 | + _gompertz_fit_mode_comparison_plot() |
| 132 | + st.markdown("### Logistic-family example (docs only)") |
| 133 | + st.markdown("This is an alternative functional family (not active in runtime code):") |
| 134 | + st.latex( |
| 135 | + r"y(m) = y_{\min} + \frac{y_{\max} - y_{\min}}{1 + \exp\left(-k\,(m - m_0)\right)}" |
217 | 136 | ) |
| 137 | + st.markdown("- `y_min, y_max`: lower and upper asymptotes") |
| 138 | + st.markdown("- `k`: steepness") |
| 139 | + st.markdown("- `m0`: inflection point") |
| 140 | + _logistic_example_plot() |