|
42 | 42 | "source": [ |
43 | 43 | "### Monte Carlo integration\n", |
44 | 44 | "\n", |
45 | | - "As a baseline we can solve this using quasi-Monte Carlo integration:" |
 | 45 | + "As a baseline we can solve this using quasi-Monte Carlo integration. It requires no modification compared to the stochastically independent case. It consists of generating samples:" |
46 | 46 | ] |
47 | 47 | }, |
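The generate–evaluate–analyze loop described above can be sketched without `chaospy`. A minimal stand-alone version, assuming a hypothetical one-dimensional model `exp(-q)` over `Uniform(0, 1)` (not the notebook's `exponential_model` or `distribution`), with `scipy.stats.qmc` supplying the Sobol sequence:

```python
import numpy as np
from scipy.stats import qmc

# Low-discrepancy (quasi-Monte Carlo) samples: 2**12 Sobol points in [0, 1).
sampler = qmc.Sobol(d=1, scramble=False)
samples = sampler.random_base2(m=12)

# Evaluate the (assumed) model for each sample.
evaluations = np.exp(-samples[:, 0])

# Analysis: the sample mean approximates the integral of exp(-q) over [0, 1].
estimate = evaluations.mean()
exact = 1 - np.exp(-1)
assert abs(estimate - exact) < 1e-3
```

Sobol points cover the unit cube more evenly than pseudo-random draws, which is why the plain sample mean converges faster here than standard Monte Carlo.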
48 | 48 | { |
|
51 | 51 | "metadata": {}, |
52 | 52 | "outputs": [], |
53 | 53 | "source": [ |
54 | | - "samples = distribution.sample(10**5, rule=\"sobol\")\n", |
55 | | - "evaluations = numpy.array([exponential_model(sample) for sample in samples.T])" |
| 54 | + "samples = distribution.sample(10**5, rule=\"sobol\")" |
| 55 | + ] |
| 56 | + }, |
| 57 | + { |
| 58 | + "cell_type": "markdown", |
| 59 | + "metadata": {}, |
| 60 | + "source": [ |
 | 61 | + "evaluating the model for each sample:" |
56 | 62 | ] |
57 | 63 | }, |
58 | 64 | { |
59 | 65 | "cell_type": "code", |
60 | 66 | "execution_count": 3, |
61 | 67 | "metadata": {}, |
| 68 | + "outputs": [], |
| 69 | + "source": [ |
| 70 | + "evaluations = numpy.array([exponential_model(sample) for sample in samples.T])" |
| 71 | + ] |
| 72 | + }, |
| 73 | + { |
| 74 | + "cell_type": "markdown", |
| 75 | + "metadata": {}, |
| 76 | + "source": [ |
 | 77 | + "and performing analysis on the samples:" |
| 78 | + ] |
| 79 | + }, |
| 80 | + { |
| 81 | + "cell_type": "code", |
| 82 | + "execution_count": 4, |
| 83 | + "metadata": {}, |
62 | 84 | "outputs": [ |
63 | 85 | { |
64 | 86 | "data": { |
|
67 | 89 | " array([0.9999 , 0.9815 , 0.96431, 0.94833, 0.93353]))" |
68 | 90 | ] |
69 | 91 | }, |
70 | | - "execution_count": 3, |
| 92 | + "execution_count": 4, |
71 | 93 | "metadata": {}, |
72 | 94 | "output_type": "execute_result" |
73 | 95 | } |
|
84 | 106 | "cell_type": "markdown", |
85 | 107 | "metadata": {}, |
86 | 108 | "source": [ |
87 | | - "Plotting the final results:" |
| 109 | + "We can also plot the final result:" |
88 | 110 | ] |
89 | 111 | }, |
90 | 112 | { |
91 | 113 | "cell_type": "code", |
92 | | - "execution_count": 4, |
| 114 | + "execution_count": 5, |
93 | 115 | "metadata": {}, |
94 | 116 | "outputs": [ |
95 | 117 | { |
|
124 | 146 | "\n", |
 125 | 147 | "Polynomial chaos expansions build on the assumption of having an orthogonal polynomial expansion. However, the classical extension to the multivariate case assumes that the probability distribution consists of stochastically independent components. If the distribution has dependencies, the classical approach will not work.\n", |
126 | 148 | "\n", |
127 | | - "The recommended approach for addressing dependent distribution is to use Generalized polynomial chaos expansion (g-pce). It assumes that there exists a smooth map $T$ between the dependent variables $Q$ and some other stochastic independent variables $R$, which we can build an expansion for. In other words:\n", |
 | 149 | + "The recommended approach for addressing dependent distributions is to use *generalized polynomial chaos expansion* (g-pce). It assumes that there exists a smooth map $T$ between the dependent variables $Q$ and some other stochastically independent variables $R$, for which we can build an expansion. In other words:\n", |
128 | 150 | "\n", |
129 | 151 | "$$\n", |
130 | 152 | "\\hat u(x, q) = \\hat u(x, T(r)) = \\sum_{n=0}^N c_n \\Phi_n(r)\n", |
|
135 | 157 | }, |
136 | 158 | { |
137 | 159 | "cell_type": "code", |
138 | | - "execution_count": 5, |
| 160 | + "execution_count": 6, |
139 | 161 | "metadata": {}, |
140 | 162 | "outputs": [], |
141 | 163 | "source": [ |
|
148 | 170 | "metadata": {}, |
149 | 171 | "source": [ |
 150 | 172 | "The map $T$ is defined as a double Rosenblatt transformation:\n", |
| 173 | + "\n", |
151 | 174 | "$$\n", |
152 | 175 | "T(r) = F_Q^{-1}\\left( F_R(r) \\right)\n", |
153 | 176 | "$$\n", |
| 177 | + "\n", |
154 | 178 | "which in `chaospy` can be constructed as follows:" |
155 | 179 | ] |
156 | 180 | }, |
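The inverse-CDF composition $T(r) = F_Q^{-1}(F_R(r))$ can be illustrated without `chaospy` in one dimension. A sketch under assumed distributions (standard normal $R$, Exponential(1) $Q$ — not the notebook's actual `distribution_q`/`distribution_r`):

```python
import numpy as np
from scipy import stats

# Samples from the independent variable R ~ N(0, 1).
rng = np.random.default_rng(42)
r = rng.standard_normal(100_000)

# T(r) = F_Q^{-1}(F_R(r)): push r through the CDF of R,
# then through the inverse CDF of Q.
q = stats.expon.ppf(stats.norm.cdf(r))

# The transformed samples follow the target distribution Q ~ Exponential(1).
assert abs(q.mean() - 1.0) < 0.02
```

In higher dimensions with dependencies, the same idea applies component-wise through conditional distributions, which is what the Rosenblatt transformation formalizes.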
157 | 181 | { |
158 | 182 | "cell_type": "code", |
159 | | - "execution_count": 6, |
| 183 | + "execution_count": 7, |
160 | 184 | "metadata": {}, |
161 | 185 | "outputs": [], |
162 | 186 | "source": [ |
|
182 | 206 | }, |
183 | 207 | { |
184 | 208 | "cell_type": "code", |
185 | | - "execution_count": 7, |
| 209 | + "execution_count": 8, |
186 | 210 | "metadata": {}, |
187 | 211 | "outputs": [], |
188 | 212 | "source": [ |
|
199 | 223 | }, |
200 | 224 | { |
201 | 225 | "cell_type": "code", |
202 | | - "execution_count": 8, |
| 226 | + "execution_count": 9, |
203 | 227 | "metadata": {}, |
204 | 228 | "outputs": [], |
205 | 229 | "source": [ |
|
214 | 238 | "source": [ |
 215 | 239 | "Note that for generating the expansion and the model approximation, we use the distribution from $R$, while for the model evaluation we use the transformed samples from $Q$.\n", |
216 | 240 | "\n", |
217 | | - "The solution model, defined with respect to $R$ can then be used to do analysis:" |
 | 241 | + "The solution model can then be used to do analysis. Just remember that the model is defined with respect to $R$, not $Q$:" |
218 | 242 | ] |
219 | 243 | }, |
220 | 244 | { |
221 | 245 | "cell_type": "code", |
222 | | - "execution_count": 9, |
| 246 | + "execution_count": 10, |
223 | 247 | "metadata": {}, |
224 | 248 | "outputs": [ |
225 | 249 | { |
|
229 | 253 | " array([1. , 0.98159, 0.9644 , 0.94841, 0.93361]))" |
230 | 254 | ] |
231 | 255 | }, |
232 | | - "execution_count": 9, |
| 256 | + "execution_count": 10, |
233 | 257 | "metadata": {}, |
234 | 258 | "output_type": "execute_result" |
235 | 259 | } |
|
251 | 275 | }, |
252 | 276 | { |
253 | 277 | "cell_type": "code", |
254 | | - "execution_count": 10, |
| 278 | + "execution_count": 11, |
255 | 279 | "metadata": {}, |
256 | 280 | "outputs": [ |
257 | 281 | { |
|
282 | 306 | "source": [ |
283 | 307 | "### Pseudo-Spectral Projection\n", |
284 | 308 | "\n", |
285 | | - "Implementing pseudo-spectral projection using generalized polynomial chaos expansion requires to look at the integration of the g-pce model, as quadrature at its essence is an integration techinque.\n", |
286 | | - "For example, the mean of the model is defined as:\n", |
287 | | - "\n", |
288 | | - "$$\n", |
289 | | - "\\int \\hat u(x, q) p_Q(q) d\\,q\n", |
290 | | - "$$\n", |
291 | | - "\n", |
292 | | - "where $p_Q$ is the probability density function.\n", |
293 | | - "Replacing $q = T(r)$, will in addition to the substitution itself, require an Jacobi determinant.\n", |
294 | | - "To simplify, we first redefine the substitution to be:\n", |
295 | | - "$$\n", |
296 | | - "F_Q(q) = F_R(r)\n", |
297 | | - "$$\n", |
298 | | - "\n", |
299 | | - "With it, the Jacobi determinant. Since the derivative of the Rosenblatt transformation is the joint probability density function, we get:\n", |
300 | | - "$$\n", |
301 | | - "p_Q(q) d\\, q = p_R(r) d\\, r\n", |
302 | | - "$$\n", |
303 | | - "or\n", |
304 | | - "$$\n", |
305 | | - "d\\, q = p_R(r)/p_Q(q) d\\, r\n", |
306 | | - "$$\n", |
307 | | - "\n", |
308 | | - "which means for our original substitution can be formulated as follows:\n", |
309 | | - "$$\n", |
310 | | - "\\int \\hat u(x, q) p_Q(q) d\\,q = \\int \\hat u(x, T(r)) p_R(r) \\tfrac{p_Q(T(r))}{p_R(r)} d\\, r\n", |
311 | | - "$$\n", |
312 | | - "\n", |
313 | | - "In other words, when we make the substitution we need to include a term for the Jacobi.\n", |
314 | | - "This can be done by for example adjusting the weights:" |
 | 309 | + "Implementing g-pce for pseudo-spectral projection requires us to generate nodes and weights from $R$ and transform the nodes using $T$:" |
315 | 310 | ] |
316 | 311 | }, |
317 | 312 | { |
318 | 313 | "cell_type": "code", |
319 | | - "execution_count": 11, |
| 314 | + "execution_count": 12, |
320 | 315 | "metadata": {}, |
321 | 316 | "outputs": [], |
322 | 317 | "source": [ |
323 | | - "def adjust_weights(nodes, weights):\n", |
324 | | - " return \n", |
325 | | - "\n", |
326 | 318 | "nodes_r, weights_r = chaospy.generate_quadrature(10, distribution_r, rule=\"gaussian\")\n", |
327 | | - "nodes_q = transform(nodes_r)\n", |
328 | | - "weights_r_adj = weights_r*distribution_q.pdf(nodes_q)/distribution_r.pdf(nodes_r)" |
| 319 | + "nodes_q = transform(nodes_r)" |
329 | 320 | ] |
330 | 321 | }, |
331 | 322 | { |
|
337 | 328 | }, |
338 | 329 | { |
339 | 330 | "cell_type": "code", |
340 | | - "execution_count": 12, |
| 331 | + "execution_count": 13, |
341 | 332 | "metadata": {}, |
342 | 333 | "outputs": [], |
343 | 334 | "source": [ |
344 | 335 | "expansion = chaospy.generate_expansion(7, distribution_r)\n", |
345 | 336 | "evaluations = numpy.array([exponential_model(sample) for sample in nodes_q.T])\n", |
346 | | - "model_approx = chaospy.fit_quadrature(expansion, nodes_r, weights_r_adj, evaluations)" |
| 337 | + "model_approx = chaospy.fit_quadrature(expansion, nodes_r, weights_r, evaluations)" |
347 | 338 | ] |
348 | 339 | }, |
349 | 340 | { |
|
357 | 348 | }, |
358 | 349 | { |
359 | 350 | "cell_type": "code", |
360 | | - "execution_count": 13, |
| 351 | + "execution_count": 14, |
361 | 352 | "metadata": {}, |
362 | 353 | "outputs": [ |
363 | 354 | { |
|
367 | 358 | " array([1. , 0.98159, 0.9644 , 0.94841, 0.93361]))" |
368 | 359 | ] |
369 | 360 | }, |
370 | | - "execution_count": 13, |
| 361 | + "execution_count": 14, |
371 | 362 | "metadata": {}, |
372 | 363 | "output_type": "execute_result" |
373 | 364 | } |
|
389 | 380 | }, |
390 | 381 | { |
391 | 382 | "cell_type": "code", |
392 | | - "execution_count": 14, |
| 383 | + "execution_count": 15, |
393 | 384 | "metadata": {}, |
394 | 385 | "outputs": [ |
395 | 386 | { |
|
428 | 419 | }, |
429 | 420 | { |
430 | 421 | "cell_type": "code", |
431 | | - "execution_count": 15, |
| 422 | + "execution_count": 16, |
432 | 423 | "metadata": {}, |
433 | 424 | "outputs": [ |
434 | 425 | { |
|
438 | 429 | " -0.9*q1**2+q0*q1-8.2*q1-q0+9.1])" |
439 | 430 | ] |
440 | 431 | }, |
441 | | - "execution_count": 15, |
| 432 | + "execution_count": 16, |
442 | 433 | "metadata": {}, |
443 | 434 | "output_type": "execute_result" |
444 | 435 | } |
|
458 | 449 | }, |
459 | 450 | { |
460 | 451 | "cell_type": "code", |
461 | | - "execution_count": 16, |
| 452 | + "execution_count": 17, |
462 | 453 | "metadata": {}, |
463 | 454 | "outputs": [ |
464 | 455 | { |
|
476 | 467 | " [ 0., -0., -0., 0., 0., 0., -0., 0., 0., 1.]])" |
477 | 468 | ] |
478 | 469 | }, |
479 | | - "execution_count": 16, |
| 470 | + "execution_count": 17, |
480 | 471 | "metadata": {}, |
481 | 472 | "output_type": "execute_result" |
482 | 473 | } |
|
494 | 485 | }, |
495 | 486 | { |
496 | 487 | "cell_type": "code", |
497 | | - "execution_count": 17, |
| 488 | + "execution_count": 18, |
498 | 489 | "metadata": {}, |
499 | 490 | "outputs": [], |
500 | 491 | "source": [ |
|
505 | 496 | }, |
506 | 497 | { |
507 | 498 | "cell_type": "code", |
508 | | - "execution_count": 18, |
| 499 | + "execution_count": 19, |
509 | 500 | "metadata": {}, |
510 | 501 | "outputs": [ |
511 | 502 | { |
|
515 | 506 | " array([1. , 0.98159, 0.9644 , 0.94841, 0.93361]))" |
516 | 507 | ] |
517 | 508 | }, |
518 | | - "execution_count": 18, |
| 509 | + "execution_count": 19, |
519 | 510 | "metadata": {}, |
520 | 511 | "output_type": "execute_result" |
521 | 512 | } |
|
530 | 521 | }, |
531 | 522 | { |
532 | 523 | "cell_type": "code", |
533 | | - "execution_count": 19, |
| 524 | + "execution_count": 20, |
534 | 525 | "metadata": {}, |
535 | 526 | "outputs": [ |
536 | 527 | { |
|
559 | 550 | "cell_type": "markdown", |
560 | 551 | "metadata": {}, |
561 | 552 | "source": [ |
562 | | - "For more details on this methodology, see\n", |
| 553 | + "For more details on this methodology, you can read the journal article\n", |
563 | 554 | "[Multivariate Polynomial Chaos Expansions with Dependent Variables](https://doi.org/10.1137/15M1020447)." |
564 | 555 | ] |
565 | 556 | } |
|