out-of-sample risk and return characteristics. In addition, a practical approach
under box and group constraints is introduced. A comprehensive set of portfolio simulations over 6 equity
universes demonstrates the appeal of the algorithm for portfolios consisting of 20 − 200 assets. HRP delivers
highly diversified allocations with low volatility, low portfolio turnover and competitive performance metrics

## Schur Complement and Symmetric Positive Semidefinite Matrices [pdf](https://www.cis.upenn.edu/~jean/schur-comp.pdf)

Jean Gallier

In this note, we provide some details and proofs of some results from Appendix A.5 (especially Section A.5.5) of Convex Optimization by Boyd and Vandenberghe.
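
For context, the headline result treated in the note can be stated as follows (with $A^{+}$ the Moore-Penrose pseudoinverse):

```latex
M \;=\; \begin{pmatrix} A & B \\ B^{\top} & C \end{pmatrix},
\qquad
S \;=\; C - B^{\top} A^{-1} B \quad \text{(the Schur complement of } A \text{ in } M\text{)}.

% With A invertible:
M \succ 0 \;\Longleftrightarrow\; A \succ 0 \ \text{and}\ S \succ 0.

% In the semidefinite case, A^{-1} is replaced by the pseudoinverse A^{+}:
M \succeq 0 \;\Longleftrightarrow\; A \succeq 0,\ \ (I - A A^{+})B = 0,\ \ C - B^{\top} A^{+} B \succeq 0.
```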

## Fast and accurate techniques for computing Schur complements and performing numerical coarse graining [slides](https://amath.colorado.edu/faculty/martinss/Talks/2009_banff.pdf)

Gunnar Martinsson

## A Closer Look at the Minimum-Variance Portfolio Optimization Model [pdf](https://downloads.hindawi.com/journals/mpe/2019/1452762.pdf)

Zhifeng Dai

Recently, by imposing a regularization term on the objective function or an additional norm constraint on the portfolio weights, a number of alternative portfolio strategies have been proposed to improve the empirical performance of the minimum-variance portfolio. In this paper, we first examine the relation between the weight norm-constrained method and the objective function regularization method in minimum-variance problems by analyzing the Karush–Kuhn–Tucker conditions of their Lagrangian functions. We give the range of parameters for the two models and the corresponding relationship between the parameters. Given the range and manner of parameter selection, this will help researchers and practitioners better understand and apply the relevant portfolio models. We apply these models to construct optimal portfolios and test the proposed propositions by employing real market data.
A collection of incremental estimators for covariance, precision, correlation, portfolios and ensembles.
## TLDR: "Just a pile of functions that forecast covariance in online fashion"
The [running_empirical_covariance](https://github.com/microprediction/precise/blob/main/examples_colab_notebooks/running_empirical_population_covariance.ipynb) colab notebook illustrates the style. To see all the other online methods of covariance estimation supplied here, run the [cov skaters manifest](https://github.com/microprediction/precise/blob/main/examples_colab_notebooks/list_all_cov_methods.ipynb) notebook. Or to look at Elo ratings,
run the [elo_ratings_and_urls](https://github.com/microprediction/precise/blob/main/examples_colab_notebooks/elo_ratings_and_code_urls.ipynb).
## M6 Financial forecasting contest utilities
You *could* use this library to enter the M6 Financial Forecasting competition:
1. Pick a cov estimator (i.e. a "cov skater"), if you wish
2. Pick a portfolio generator, if you wish
3. Pick extra shrinkage params, if you wish
4. Pick love and hate ticker lists, if you wish
See [precise/examples_m6](https://github.com/microprediction/precise/tree/main/examples_m6) and register at the [m6 competition](https://m6competition.com/). See disclaimer below.
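
The four steps above can be sketched end-to-end in plain numpy. This is a hypothetical illustration, not the package's API (the estimator, generator, shrinkage parameter, and love/hate tilt convention below are all stand-ins); see examples_m6 for the real thing.

```python
import numpy as np

rng = np.random.default_rng(0)
tickers = ['AAA', 'BBB', 'CCC', 'DDD']
returns = rng.standard_normal((100, 4)) * 0.01   # synthetic daily returns

# 1. A cov estimator (the sample covariance stands in for a "cov skater")
cov = np.cov(returns, rowvar=False)

# 3. Extra shrinkage toward the diagonal (shrinkage parameter is illustrative)
lam = 0.2
cov = (1 - lam) * cov + lam * np.diag(np.diag(cov))

# 2. A portfolio generator (fully invested minimum variance)
w = np.linalg.solve(cov, np.ones(len(tickers)))
w = w / w.sum()

# 4. Love and hate ticker lists (hypothetical convention: tilt, then renormalize)
love, hate = {'AAA'}, {'DDD'}
tilt = np.array([1.1 if t in love else 0.9 if t in hate else 1.0 for t in tickers])
w = w * tilt
w = w / w.sum()
```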
## Covariance skaters
Similar in style to skaters used in the [timemachines](https://github.com/microprediction/timemachines) package, this package may be thought of as a collection of covariance prediction functions taking one vector at a time, and also the prior state, and spitting out a prediction mean vector *x*, a prediction covariance *x_cov*, and a posterior state whose interpretation is the responsibility of the skater, not the caller.

```python
from pprint import pprint
from precise.skatertools.syntheticdata.miscellaneous import create_correlated_dataset
from precise.skaters.covariance.runemmp import run_emp_pcov_d0  # <-- Running empirical population covariance

# Create some synthetic data and feed it to the skater one vector at a time,
# carrying the state s between calls
ys = create_correlated_dataset(n=500)
s = {}
for y in ys:
    x, x_cov, s = run_emp_pcov_d0(s=s, y=y)
pprint(x_cov)
```
See [/examples_basic_usage](https://github.com/microprediction/precise/tree/main/examples_basic_usage). And yes, this mildly unusual convention requires the caller to maintain state from one call to the next: See the timemachines [faq](https://github.com/microprediction/timemachines/blob/main/FAQ.md) for justification of this style.
### Elo ratings
As noted, see the [elo_ratings_and_urls](https://github.com/microprediction/precise/blob/main/examples_colab_notebooks/elo_ratings_and_code_urls.ipynb).
### Browsing for skaters
You can hunt for skaters other than *run_emp_pcov_d0* in [precise/skaters/covariance](https://github.com/microprediction/precise/tree/main/precise/skaters/covariance). There are some location utilities in [precise/whereami](https://github.com/microprediction/precise/blob/main/precise/whereami.py).
| Skater | Location | Notes |
|---|---|---|
| buf_huber_pcov_d1_a1_b2_n50 | [skaters/covariance/bufhuber](https://github.com/microprediction/precise/blob/main/precise/skaters/covariance/bufhuber.py) | Applies an approach that exploits Huber pseudo-means to a buffer of data of length 50 in need of differencing once, with generalized Huber loss parameters a=1, b=2. |
| buf_sk_ld_pcov_d0_n100 | [skaters/covariance/bufsk](https://github.com/microprediction/precise/blob/main/precise/skaters/covariance/bufsk.py) | Applies sk-learn's implementation of Ledoit-Wolf to stationary buffered data of length 100. |
| ewa_pm_emp_scov_r01 | [skaters/covariance/ewapartial](https://github.com/microprediction/precise/blob/main/precise/skaters/covariance/ewapartial.py) | Performs an incremental, recency-weighted sample covariance estimate that exploits partial moments. Uses a memory parameter r=0.01. |

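
For intuition about the ewa_* family, here is a hedged sketch of a recency-weighted covariance update with a memory parameter r. It is not the library's implementation, and the state layout is made up for illustration.

```python
import numpy as np

def ewa_cov_update(s, y, r=0.01):
    """One step of an exponentially weighted mean/covariance estimate.
    Illustrative only; not the package's state convention."""
    y = np.asarray(y, dtype=float)
    if not s:
        # First observation: start the mean there, with a zero covariance
        return {'mean': y, 'cov': np.zeros((y.size, y.size))}
    mean = (1 - r) * s['mean'] + r * y          # recency-weighted mean
    delta = y - s['mean']
    cov = (1 - r) * s['cov'] + r * (1 - r) * np.outer(delta, delta)
    return {'mean': mean, 'cov': cov}

s = {}
rng = np.random.default_rng(0)
for y in rng.standard_normal((500, 3)):
    s = ewa_cov_update(s, y, r=0.01)
```

Smaller r means longer memory; r=0.01 corresponds very roughly to an effective window of a hundred observations.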
### Reading skater names

Broad calculation style categories:

| Shorthand | Interpretation | Incremental ? |
|---|---|---|

Differencing hints:

| Shorthand | Interpretation |
|---|---|
| d1 | For use on data that is iid after taking one difference |
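
A tiny example of what the differencing hints mean in practice (illustrative, not package code):

```python
import numpy as np

# A random walk is not stationary: feed it to a *_d1 skater, which
# differences once. Its increments are iid: a *_d0 skater can consume
# those directly.
rng = np.random.default_rng(0)
steps = rng.standard_normal(1000)   # iid -> d0 territory
walk = np.cumsum(steps)             # integrated once -> d1 territory
recovered = np.diff(walk)           # differencing recovers the iid increments
assert np.allclose(recovered, steps[1:])
```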
### Stand-alone covariance utilities
1. The [covariance/statefunctions](https://github.com/microprediction/precise/blob/main/precise/skaters/covarianceutil/statefunctions.py) are illustrated by the example [running_oas_covariance](https://github.com/microprediction/precise/blob/main/examples_basic_usage/running_oas_covariance.py).
2. The [covariance/statemutations](https://github.com/microprediction/precise/blob/main/precise/skaters/covarianceutil/statemutations.py) do things like ensuring both covariance and precision matrices exist in the state. For instance: s = both_cov(s) ensures both sample and population covariances are present.
3. Some [/covariance/datascatterfunctions](https://github.com/microprediction/precise/blob/main/precise/skaters/covarianceutil/datascatterfunctions.py)
4. The [/covariance/datafunctions](https://github.com/microprediction/precise/blob/main/precise/skaters/covarianceutil/datafunctions.py) take data and produce covariance functions.
5. The [/covariance/covfunctions](https://github.com/microprediction/precise/blob/main/precise/skaters/covarianceutil/covfunctions.py) manipulate 2d cov arrays.
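
As a rough illustration of the state-mutation idea (hypothetical state layout and helper name, not the package's):

```python
import numpy as np

def ensure_cov_and_precision(s):
    """Mutate a state dict so that both 'cov' and 'pre' (precision) exist,
    in the spirit of helpers like both_cov. Illustrative only."""
    if 'cov' in s and 'pre' not in s:
        s['pre'] = np.linalg.inv(s['cov'])
    elif 'pre' in s and 'cov' not in s:
        s['cov'] = np.linalg.inv(s['pre'])
    return s

s = ensure_cov_and_precision({'cov': np.array([[2.0, 0.5], [0.5, 1.0]])})
```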
## Portfolios, ensembles & mixture of experts
Too fluid to document currently. See the portfolio directories in [skaters](https://github.com/microprediction/precise/tree/main/precise/skaters).
## Miscellaneous remarks
- Here is some related, and potentially related, [literature](https://github.com/microprediction/precise/blob/main/LITERATURE.md).
- This is part of the microprediction project, should you ever care to [cite](https://github.com/microprediction/microprediction/blob/master/CITE.md) it. Uses include mixture-of-experts models for time-series analysis, buried somewhere in [timemachines](https://github.com/microprediction/timemachines/tree/main/timemachines/skatertools).
- If you just want univariate calculations, and don't want numpy as a dependency, there is [momentum](https://github.com/microprediction/momentum). However if you want univariate forecasts of the variance of something, as distinct from mere online calculations of the same, I would suggest checking the [time-series elo ratings](https://microprediction.github.io/timeseries-elo-ratings/html_leaderboards/special-k_001.html) and the "special" category in particular.
- The name of this package refers to precision matrices, not numerical precision. This isn't a source of high precision covariance *calculations* per se. The intent is more in forecasting future realized covariance. Perhaps I'll include some more numerically stable methods from [this survey](https://dbs.ifi.uni-heidelberg.de/files/Team/eschubert/publications/SSDBM18-covariance-authorcopy.pdf) to make the name more fitting. Pull requests are welcome!
- The intent is that methods are parameter free. However some not-quite-autonomous methods admit a few parameters (the factories). A few might even use just one additional scalar parameter *r* with a space-filling curve convention, somewhat akin to the tuning of skaters explained [here](https://github.com/microprediction/timemachines/tree/main/timemachines/skatertools/tuning) in the timemachines package.