
Commit 2ea0317

Upgrade all HTTP links to HTTPS across lecture series (#523)

Authored by Copilot and mmcky

* Initial plan
* Upgrade all HTTP links to HTTPS across 12 lecture files

Co-authored-by: Copilot <[email protected]>
Co-authored-by: mmcky <[email protected]>

1 parent a62a223, commit 2ea0317

File tree

12 files changed: +33 -33 lines changed
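A sweep like this commit's can be scripted. The following is an illustrative sketch, not the tooling actually used for #523; the host allow-list and the `lectures/` directory layout are assumptions:

```python
import re
from pathlib import Path

# Hypothetical allow-list: only upgrade hosts known to serve the
# same content over HTTPS (guessing from the links in this diff).
HOSTS = ("quantecon.org", "johnstachurski.net",
         "neuralnetworksanddeeplearning.com")
PATTERN = re.compile(r"http://(" + "|".join(re.escape(h) for h in HOSTS) + r")")

def upgrade_links(text):
    """Rewrite http:// links to https:// for allow-listed hosts only."""
    return PATTERN.sub(r"https://\1", text)

if __name__ == "__main__":
    # Assumed layout: lecture sources live under lectures/*.md
    for path in Path("lectures").glob("*.md"):
        src = path.read_text(encoding="utf-8")
        new = upgrade_links(src)
        if new != src:
            path.write_text(new, encoding="utf-8")
            print(f"updated {path}")
```

Restricting the rewrite to an allow-list avoids upgrading hosts that may not serve HTTPS at all.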


lectures/back_prop.md

Lines changed: 1 addition & 1 deletion

@@ -535,7 +535,7 @@ Image(fig.to_image(format="png"))
 It is fun to think about how deepening the neural net for the above example affects the quality of approximation
 
-* If the network is too deep, you'll run into the [vanishing gradient problem](http://neuralnetworksanddeeplearning.com/chap5.html)
+* If the network is too deep, you'll run into the [vanishing gradient problem](https://neuralnetworksanddeeplearning.com/chap5.html)
 
 * Other parameters such as the step size and the number of epochs can be as important or more important than the number of layers in the situation considered in this lecture.
 
 * Indeed, since $f$ is a linear function of $x$, a one-layer network with the identity map as an activation would probably work best.

lectures/finite_markov.md

Lines changed: 11 additions & 11 deletions

@@ -213,11 +213,11 @@ One natural way to answer questions about Markov chains is to simulate them.
 
 (To approximate the probability of event $E$, we can simulate many times and count the fraction of times that $E$ occurs).
 
-Nice functionality for simulating Markov chains exists in [QuantEcon.py](http://quantecon.org/quantecon-py).
+Nice functionality for simulating Markov chains exists in [QuantEcon.py](https://quantecon.org/quantecon-py).
 
 * Efficient, bundled with lots of other useful routines for handling Markov chains.
 
-However, it's also a good exercise to roll our own routines --- let's do that first and then come back to the methods in [QuantEcon.py](http://quantecon.org/quantecon-py).
+However, it's also a good exercise to roll our own routines --- let's do that first and then come back to the methods in [QuantEcon.py](https://quantecon.org/quantecon-py).
 
 In these exercises, we'll take the state space to be $S = 0,\ldots, n-1$.

@@ -232,7 +232,7 @@ The Markov chain is then constructed as discussed above. To repeat:
 
 To implement this simulation procedure, we need a method for generating draws from a discrete distribution.
 
-For this task, we'll use `random.draw` from [QuantEcon](http://quantecon.org/quantecon-py), which works as follows:
+For this task, we'll use `random.draw` from [QuantEcon](https://quantecon.org/quantecon-py), which works as follows:
 
 ```{code-cell} python3
 ψ = (0.3, 0.7) # probabilities over {0, 1}

@@ -295,7 +295,7 @@ always close to 0.25, at least for the `P` matrix above.
 
 ### Using QuantEcon's Routines
 
-As discussed above, [QuantEcon.py](http://quantecon.org/quantecon-py) has routines for handling Markov chains, including simulation.
+As discussed above, [QuantEcon.py](https://quantecon.org/quantecon-py) has routines for handling Markov chains, including simulation.
 
 Here's an illustration using the same P as the preceding example

@@ -307,7 +307,7 @@ X = mc.simulate(ts_length=1_000_000)
 np.mean(X == 0)
 ```
 
-The [QuantEcon.py](http://quantecon.org/quantecon-py) routine is [JIT compiled](https://python-programming.quantecon.org/numba.html#numba-link) and much faster.
+The [QuantEcon.py](https://quantecon.org/quantecon-py) routine is [JIT compiled](https://python-programming.quantecon.org/numba.html#numba-link) and much faster.
 
 ```{code-cell} ipython
 %time mc_sample_path(P, sample_size=1_000_000) # Our homemade code version

@@ -557,7 +557,7 @@ $$
 It's clear from the graph that this stochastic matrix is irreducible: we can eventually
 reach any state from any other state.
 
-We can also test this using [QuantEcon.py](http://quantecon.org/quantecon-py)'s MarkovChain class
+We can also test this using [QuantEcon.py](https://quantecon.org/quantecon-py)'s MarkovChain class
 
 ```{code-cell} python3
 P = [[0.9, 0.1, 0.0],

@@ -776,7 +776,7 @@ One option is to regard solving system {eq}`eq:eqpsifixed` as an eigenvector pr
 $\psi$ such that $\psi = \psi P$ is a left eigenvector associated
 with the unit eigenvalue $\lambda = 1$.
 
-A stable and sophisticated algorithm specialized for stochastic matrices is implemented in [QuantEcon.py](http://quantecon.org/quantecon-py).
+A stable and sophisticated algorithm specialized for stochastic matrices is implemented in [QuantEcon.py](https://quantecon.org/quantecon-py).
 
 This is the one we recommend:

@@ -867,7 +867,7 @@ The result tells us that the fraction of time the chain spends at state $x$ conv
 (new_interp_sd)=
 This gives us another way to interpret the stationary distribution --- provided that the convergence result in {eq}`llnfmc0` is valid.
 
-The convergence asserted in {eq}`llnfmc0` is a special case of a law of large numbers result for Markov chains --- see [EDTC](http://johnstachurski.net/edtc.html), section 4.3.4 for some additional information.
+The convergence asserted in {eq}`llnfmc0` is a special case of a law of large numbers result for Markov chains --- see [EDTC](https://johnstachurski.net/edtc.html), section 4.3.4 for some additional information.
 
 (mc_eg1-2)=
 ### Example

@@ -1322,7 +1322,7 @@ $$
 
 Tauchen's method {cite}`Tauchen1986` is the most common method for approximating this continuous state process with a finite state Markov chain.
 
-A routine for this already exists in [QuantEcon.py](http://quantecon.org/quantecon-py) but let's write our own version as an exercise.
+A routine for this already exists in [QuantEcon.py](https://quantecon.org/quantecon-py) but let's write our own version as an exercise.
 
 As a first step, we choose

@@ -1363,13 +1363,13 @@ The exercise is to write a function `approx_markov(rho, sigma_u, m=3, n=7)` that
 $\{x_0, \ldots, x_{n-1}\} \subset \mathbb R$ and $n \times n$ matrix
 $P$ as described above.
 
-* Even better, write a function that returns an instance of [QuantEcon.py's](http://quantecon.org/quantecon-py) MarkovChain class.
+* Even better, write a function that returns an instance of [QuantEcon.py's](https://quantecon.org/quantecon-py) MarkovChain class.
 ```
 
 ```{solution} fm_ex3
 :class: dropdown
 
-A solution from the [QuantEcon.py](http://quantecon.org/quantecon-py) library
+A solution from the [QuantEcon.py](https://quantecon.org/quantecon-py) library
 can be found [here](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/markov/approximation.py).
 
 ```
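The finite_markov.md hunks above quote the lecture's "roll our own routines" exercise. A minimal self-contained sketch of such a routine follows; the inverse-CDF draw and the `mc_sample_path` name echo the lecture, but this standalone implementation and its parameters are illustrative, not the lecture's code:

```python
import numpy as np

def random_draw(ψ, rng):
    """Draw an index from the discrete distribution ψ via the inverse-CDF method."""
    return np.searchsorted(np.cumsum(ψ), rng.uniform())

def mc_sample_path(P, ψ_0=None, sample_size=1_000, seed=0):
    """Simulate a path of a Markov chain with stochastic matrix P.

    ψ_0 is the initial distribution; if omitted, the initial state
    is drawn uniformly from the state space.
    """
    P = np.asarray(P)
    n = P.shape[0]
    rng = np.random.default_rng(seed)
    X = np.empty(sample_size, dtype=int)
    X[0] = random_draw(ψ_0, rng) if ψ_0 is not None else rng.integers(n)
    for t in range(sample_size - 1):
        # Next state is drawn from row X[t] of P
        X[t + 1] = random_draw(P[X[t]], rng)
    return X

P = [[0.4, 0.6],
     [0.2, 0.8]]
X = mc_sample_path(P, ψ_0=(0.5, 0.5), sample_size=100_000)
print(np.mean(X == 0))  # should be close to the stationary probability 0.25
```

For this `P`, the stationary distribution solves $\psi = \psi P$, giving $\psi = (0.25, 0.75)$, which is what the long-run fraction of time at state 0 converges to.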

lectures/kalman.md

Lines changed: 3 additions & 3 deletions

@@ -506,11 +506,11 @@ In this case, for any initial choice of $\Sigma_0$ that is both non-negative and
 ```{index} single: Kalman Filter; Programming Implementation
 ```
 
-The class `Kalman` from the [QuantEcon.py](http://quantecon.org/quantecon-py) package implements the Kalman filter
+The class `Kalman` from the [QuantEcon.py](https://quantecon.org/quantecon-py) package implements the Kalman filter
 
 * Instance data consists of:
     * the moments $(\hat x_t, \Sigma_t)$ of the current prior.
-    * An instance of the [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) class from [QuantEcon.py](http://quantecon.org/quantecon-py).
+    * An instance of the [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) class from [QuantEcon.py](https://quantecon.org/quantecon-py).
 
 The latter represents a linear state space model of the form

@@ -530,7 +530,7 @@ $$
 Q := CC' \quad \text{and} \quad R := HH'
 $$
 
-* The class `Kalman` from the [QuantEcon.py](http://quantecon.org/quantecon-py) package has a number of methods, some that we will wait to use until we study more advanced applications in subsequent lectures.
+* The class `Kalman` from the [QuantEcon.py](https://quantecon.org/quantecon-py) package has a number of methods, some that we will wait to use until we study more advanced applications in subsequent lectures.
 * Methods pertinent for this lecture are:
     * `prior_to_filtered`, which updates $(\hat x_t, \Sigma_t)$ to $(\hat x_t^F, \Sigma_t^F)$
    * `filtered_to_forecast`, which updates the filtering distribution to the predictive distribution -- which becomes the new prior $(\hat x_{t+1}, \Sigma_{t+1})$
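The kalman.md hunks list the `prior_to_filtered` and `filtered_to_forecast` methods. A bare-bones sketch of what those two updates compute follows (the standard Kalman recursions, using the noise covariances $Q$ and $R$ from the quoted text); this is illustrative and not the QuantEcon.py implementation, and the scalar demo values are assumptions:

```python
import numpy as np

def prior_to_filtered(x_hat, Σ, y, G, R):
    """Condition the prior N(x_hat, Σ) on an observation y = G x + v, v ~ N(0, R)."""
    K = Σ @ G.T @ np.linalg.inv(G @ Σ @ G.T + R)   # Kalman gain
    x_hat_F = x_hat + K @ (y - G @ x_hat)          # filtered mean
    Σ_F = Σ - K @ G @ Σ                            # filtered covariance
    return x_hat_F, Σ_F

def filtered_to_forecast(x_hat_F, Σ_F, A, Q):
    """Push the filtered distribution through x' = A x + w, w ~ N(0, Q)."""
    return A @ x_hat_F, A @ Σ_F @ A.T + Q          # new prior moments

# Scalar demo (all values illustrative): prior N(0, 1), observe y = 2
x_hat = np.array([0.0]); Σ = np.array([[1.0]])
G = np.array([[1.0]]); R = np.array([[1.0]])       # observation map, noise variance
A = np.array([[1.0]]); Q = np.array([[1.0]])       # transition, shock variance

x_hat_F, Σ_F = prior_to_filtered(x_hat, Σ, np.array([2.0]), G, R)
x_hat_new, Σ_new = filtered_to_forecast(x_hat_F, Σ_F, A, Q)
print(x_hat_F, Σ_F, Σ_new)
```

With equal prior and observation variances the filtered mean lands halfway between the prior mean and the observation, and the forecast step adds the shock variance back in.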

lectures/lake_model.md

Lines changed: 1 addition & 1 deletion

@@ -496,7 +496,7 @@ Thus, the percentages of time that an infinitely lived worker spends employed
 
 How long does it take for time series sample averages to converge to cross-sectional averages?
 
-We can use [QuantEcon.py's](http://quantecon.org/quantecon-py)
+We can use [QuantEcon.py's](https://quantecon.org/quantecon-py)
 MarkovChain class to investigate this.
 
 Let's plot the path of the sample averages over 5,000 periods

lectures/linear_models.md

Lines changed: 1 addition & 1 deletion

@@ -1334,7 +1334,7 @@ Weaker sufficient conditions for convergence associate eigenvalues equaling or
 ## Code
 
 Our preceding simulations and calculations are based on code in
-the file [lss.py](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) from the [QuantEcon.py](http://quantecon.org/quantecon-py) package.
+the file [lss.py](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) from the [QuantEcon.py](https://quantecon.org/quantecon-py) package.
 
 The code implements a class for handling linear state space models (simulations, calculating moments, etc.).

lectures/lqcontrol.md

Lines changed: 1 addition & 1 deletion

@@ -556,7 +556,7 @@ for $t = 0, \ldots, T-1$ attains the minimum of {eq}`lq_object` subject to our c
 ## Implementation
 
 We will use code from [lqcontrol.py](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lqcontrol.py)
-in [QuantEcon.py](http://quantecon.org/quantecon-py)
+in [QuantEcon.py](https://quantecon.org/quantecon-py)
 to solve finite and infinite horizon linear quadratic control problems.
 
 In the module, the various updating, simulation and fixed point methods

lectures/markov_perf.md

Lines changed: 3 additions & 3 deletions

@@ -335,7 +335,7 @@ This is the approach we adopt in the next section.
 
 ### Implementation
 
-We use the function [nnash](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lqnash.py) from [QuantEcon.py](http://quantecon.org/quantecon-py) that computes a Markov perfect equilibrium of the infinite horizon linear-quadratic dynamic game in the manner described above.
+We use the function [nnash](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lqnash.py) from [QuantEcon.py](https://quantecon.org/quantecon-py) that computes a Markov perfect equilibrium of the infinite horizon linear-quadratic dynamic game in the manner described above.
 
 ## Application

@@ -439,7 +439,7 @@ From these, we compute the infinite horizon MPE using the preceding code
 
 Running the code produces the following output.
 
-One way to see that $F_i$ is indeed optimal for firm $i$ taking $F_2$ as given is to use [QuantEcon.py](http://quantecon.org/quantecon-py)'s LQ class.
+One way to see that $F_i$ is indeed optimal for firm $i$ taking $F_2$ as given is to use [QuantEcon.py](https://quantecon.org/quantecon-py)'s LQ class.
 
 In particular, let's take F2 as computed above, plug it into {eq}`eq_mpe_p1p` and {eq}`eq_mpe_p1d` to get firm 1's problem and solve it using LQ.

@@ -520,7 +520,7 @@ Replicate the {ref}`pair of figures <mpe_vs_monopolist>` showing the comparison
 
 Parameters are as in duopoly_mpe.py and you can use that code to compute MPE policies under duopoly.
 
-The optimal policy in the monopolist case can be computed using [QuantEcon.py](http://quantecon.org/quantecon-py)'s LQ class.
+The optimal policy in the monopolist case can be computed using [QuantEcon.py](https://quantecon.org/quantecon-py)'s LQ class.
 ```
 
 ```{solution-start} mp_ex1

lectures/optgrowth.md

Lines changed: 6 additions & 6 deletions

@@ -34,7 +34,7 @@ model studied in
 
 * {cite}`StokeyLucas1989`, chapter 2
 * {cite}`Ljungqvist2012`, section 3.1
-* [EDTC](http://johnstachurski.net/edtc.html), chapter 1
+* [EDTC](https://johnstachurski.net/edtc.html), chapter 1
 * {cite}`Sundaram1996`, chapter 12
 
 It is an extension of the simple {doc}`cake eating problem <cake_eating_problem>` we looked at earlier.

@@ -282,7 +282,7 @@ The term $\int v(f(y - c) z) \phi(dz)$ can be understood as the expected next pe
 * the state is $y$
 * consumption is set to $c$
 
-As shown in [EDTC](http://johnstachurski.net/edtc.html), theorem 10.1.11 and a range of other texts
+As shown in [EDTC](https://johnstachurski.net/edtc.html), theorem 10.1.11 and a range of other texts
 
 > *The value function* $v^*$ *satisfies the Bellman equation*

@@ -328,7 +328,7 @@ In our setting, we have the following key result
 The intuition is similar to the intuition for the Bellman equation, which was
 provided after {eq}`fpb30`.
 
-See, for example, theorem 10.1.11 of [EDTC](http://johnstachurski.net/edtc.html).
+See, for example, theorem 10.1.11 of [EDTC](https://johnstachurski.net/edtc.html).
 
 Hence, once we have a good approximation to $v^*$, we can compute the
 (approximately) optimal policy by computing the corresponding greedy policy.

@@ -389,7 +389,7 @@ $$
 \rho(g, h) = \sup_{y \geq 0} |g(y) - h(y)|
 $$
 
-See [EDTC](http://johnstachurski.net/edtc.html), lemma 10.1.18.
+See [EDTC](https://johnstachurski.net/edtc.html), lemma 10.1.18.
 
 Hence, it has exactly one fixed point in this set, which we know is equal to the value function.

@@ -404,7 +404,7 @@ This iterative method is called **value function iteration**.
 We also know that a feasible policy is optimal if and only if it is $v^*$-greedy.
 
 It's not too hard to show that a $v^*$-greedy policy exists
-(see [EDTC](http://johnstachurski.net/edtc.html), theorem 10.1.11 if you get stuck).
+(see [EDTC](https://johnstachurski.net/edtc.html), theorem 10.1.11 if you get stuck).
 
 Hence, at least one optimal policy exists.

@@ -426,7 +426,7 @@ Unfortunately, they tend to be case-specific, as opposed to valid for a large ra
 Nevertheless, their main conclusions are usually in line with those stated for
 the bounded case just above (as long as we drop the word "bounded").
 
-Consult, for example, section 12.2 of [EDTC](http://johnstachurski.net/edtc.html), {cite}`Kamihigashi2012` or {cite}`MV2010`.
+Consult, for example, section 12.2 of [EDTC](https://johnstachurski.net/edtc.html), {cite}`Kamihigashi2012` or {cite}`MV2010`.
 
 ## Computation
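The optgrowth.md hunks above reference value function iteration. A compact illustrative sketch of the iteration follows; the log utility, the deterministic production function `f(k) = k**α`, and all parameter and grid values here are assumptions for the demo, not taken from the lecture:

```python
import numpy as np

α, β = 0.4, 0.96
grid = np.linspace(1e-4, 4.0, 120)   # grid for the state y (current income)

def T(v):
    """Bellman operator: (Tv)(y) = max_c { log(c) + β v(f(y - c)) }."""
    v_new = np.empty_like(v)
    for i, y in enumerate(grid):
        c = np.linspace(1e-10, y, 100)                      # candidate consumption
        y_next = (y - c)**α                                 # next-period income
        vals = np.log(c) + β * np.interp(y_next, grid, v)   # RHS of Bellman equation
        v_new[i] = vals.max()
    return v_new

# Iterate from v = 0; the contraction property drives successive
# iterates together at geometric rate β
v = np.zeros(len(grid))
for _ in range(300):
    v_new = T(v)
    error = np.max(np.abs(v_new - v))   # sup-norm distance between iterates
    v = v_new
print(f"final sup-norm error: {error:.1e}")
```

Because $T$ is a contraction of modulus $\beta$ in the sup norm, the error between successive iterates shrinks by roughly a factor $\beta$ per step, which is why a few hundred iterations suffice here.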

lectures/perm_income_cons.md

Lines changed: 2 additions & 2 deletions

@@ -60,7 +60,7 @@ In this lecture, we'll
 
 * show how the solution to the LQ permanent income model can be obtained using LQ control methods.
 * represent the model as a linear state space system as in {doc}`this lecture <linear_models>`.
-* apply [QuantEcon](http://quantecon.org/quantecon-py)'s [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) class to characterize statistical features of the consumer's optimal consumption and borrowing plans.
+* apply [QuantEcon](https://quantecon.org/quantecon-py)'s [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) class to characterize statistical features of the consumer's optimal consumption and borrowing plans.
 
 We'll then use these characterizations to construct a simple model of cross-section wealth and
 consumption dynamics in the spirit of Truman Bewley {cite}`Bewley86`.

@@ -204,7 +204,7 @@ $$
 
 Here we solve the same model using {doc}`LQ methods <lqcontrol>` based on dynamic programming.
 
-After confirming that answers produced by the two methods agree, we apply [QuantEcon](http://quantecon.org/quantecon-py)'s [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py)
+After confirming that answers produced by the two methods agree, we apply [QuantEcon](https://quantecon.org/quantecon-py)'s [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py)
 class to illustrate features of the model.
 
 Why solve a model in two distinct ways?

lectures/rational_expectations.md

Lines changed: 1 addition & 1 deletion

@@ -576,7 +576,7 @@ Let the firm's belief function $H$ be as given in {eq}`ree_hlom2`.
 
 Formulate the firm's problem as a discounted optimal linear regulator problem, being careful to describe all of the objects needed.
 
-Use the class `LQ` from the [QuantEcon.py](http://quantecon.org/quantecon-py) package to solve the firm's problem for the following parameter values:
+Use the class `LQ` from the [QuantEcon.py](https://quantecon.org/quantecon-py) package to solve the firm's problem for the following parameter values:
 
 $$
 a_0= 100, a_1= 0.05, \beta = 0.95, \gamma=10, \kappa_0 = 95.5, \kappa_1 = 0.95
