This repository was archived by the owner on May 3, 2024. It is now read-only.

Commit c8a32d0 (2 parents: ef671a4 + cdc8962): Merge branch 'master' of https://github.com/kvrigor/algosel-rl

1 file changed: README.md (50 additions, 1 deletion)
## algosel-rl

Source code for my Master's dissertation, _Algorithm Selection for Subgraph Isomorphism Problems: A Reinforcement Learning Approach_.
### Running the scripts

To start, download and install R (version 3.4.4+) from [CRAN](https://cloud.r-project.org). This installation includes the R interpreter and a simple GUI for writing R scripts, which is sufficient for running the scripts in this repo; if you plan to debug or modify them, however, I highly suggest using a full-featured IDE such as [RStudio](https://www.rstudio.com/products/rstudio/download/).
#### Installing prerequisite packages

Run `source('install_packages.R')` at the R command line to install all the necessary packages.
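The actual contents of `install_packages.R` live in this repo; as a rough sketch of what such a helper typically does (the package list below is illustrative, drawn from the libraries mentioned in the Useful Links section, not from the real script):

```r
# Illustrative sketch of an install-packages helper -- NOT the actual
# contents of install_packages.R. Installs any package in `pkgs` that
# is not already present in the local library.
pkgs <- c("aslib", "llama", "rmarkdown", "tensorflow")
missing <- pkgs[!pkgs %in% rownames(installed.packages())]
if (length(missing) > 0) {
  install.packages(missing, repos = "https://cloud.r-project.org")
}
```

Sourcing a single script like this keeps the setup reproducible: a fresh R installation only needs one command to pull in everything the notebooks depend on.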
#### Rendering R Markdown (.Rmd) files

The rendered contents of the .Rmd files can be readily viewed at [RPubs](https://rpubs.com). Check out the following links:
* [ntbk_eda_graphs2015.Rmd](http://rpubs.com/kvrigor/eda_graphs)
* [ntbk_eda_hard.Rmd](http://rpubs.com/kvrigor/eda_graphs_hard)
* [ntbk_asresults_graphs2015.Rmd](http://rpubs.com/kvrigor/asresults_graphs)
* [ntbk_asresults_reinforce.Rmd](http://rpubs.com/kvrigor/asresults_reinforce)
These files can also be rendered locally; the easiest way to do this is through RStudio. Check out this [guide](https://rmarkdown.rstudio.com/articles_intro.html).
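Outside RStudio, the same rendering can be done from a plain R console via the `rmarkdown` package (the file name below is one of the notebooks listed above; `rmarkdown` also requires pandoc, which ships with RStudio but must be installed separately otherwise):

```r
# Render one of the repo's notebooks to a standalone HTML document.
# Assumes the rmarkdown package is installed (see install_packages.R)
# and that pandoc is available on the PATH.
library(rmarkdown)
render("ntbk_eda_graphs2015.Rmd", output_format = "html_document")
```

`render()` writes the output file (here, `ntbk_eda_graphs2015.html`) next to the source .Rmd by default.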
### Dissertation paper

The paper was written in [LaTeX](https://en.wikibooks.org/wiki/LaTeX) using [TeXStudio](https://www.texstudio.org) on Windows. The typesetting files are taken from the [utmthesis](https://github.com/utmthesis/utmthesis/releases/tag/v5.1) (v5.1) GitHub repository.
### Useful Links

**R Packages**
* Algorithm Selection Library (aslib). [RDoc](https://www.rdocumentation.org/packages/aslib/versions/0.1)
* Leveraging Learning to Automatically Manage Algorithms (llama). [RDoc](https://www.rdocumentation.org/packages/llama/versions/0.9.2) | [BitBucket](https://bitbucket.org/lkotthoff/llama)
* R interface to TensorFlow. [link](https://tensorflow.rstudio.com/tensorflow/)
**Recommended Reads**
* Kotthoff, L. (2016). **Algorithm selection for combinatorial search problems: A survey**. In Data Mining and Constraint Programming (pp. 149-190). Springer, Cham. [paper](http://www.aaai.org/ojs/index.php/aimagazine/article/download/2460/2438)
* Kotthoff, L., McCreesh, C., & Solnon, C. (2016, May). **Portfolios of subgraph isomorphism algorithms**. In International Conference on Learning and Intelligent Optimization (pp. 107-122). Springer, Cham. [paper](https://hal.archives-ouvertes.fr/hal-01301829/document)
* Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Fréchette, A., ... & Vanschoren, J. (2016). **ASlib: A benchmark library for algorithm selection**. Artificial Intelligence, 237, 41-58. [paper](https://arxiv.org/pdf/1506.02465)
* Kotthoff, L. (2013). **LLAMA: Leveraging learning to automatically manage algorithms**. arXiv preprint arXiv:1306.1031. [paper](https://arxiv.org/pdf/1306.1031)
* Lindauer, M., van Rijn, J. N., & Kotthoff, L. (2017, December). **Open Algorithm Selection Challenge 2017: Setup and Scenarios**. In Open Algorithm Selection Challenge 2017 (pp. 1-7). [paper](http://proceedings.mlr.press/v79/lindauer17a/lindauer17a.pdf)
* Smith-Miles, K. A. (2009). **Cross-disciplinary perspectives on meta-learning for algorithm selection**. ACM Computing Surveys (CSUR), 41(1), 6. [paper](https://www.researchgate.net/profile/Kate_Smith-Miles/publication/220565856_Cross-Disciplinary_Perspectives_on_Meta-Learning_for_Algorithm_Selection/links/57e1f8d208ae1f0b4d93fa7d/Cross-Disciplinary-Perspectives-on-Meta-Learning-for-Algorithm-Selection.pdf)
* Sutton, R. S., & Barto, A. G. (1998). **Introduction to Reinforcement Learning** (Vol. 135). Cambridge: MIT Press. [book](http://incompleteideas.net/book/bookdraft2017nov5.pdf)
* Policy Gradients
  * I still find Sutton & Barto's **Introduction to Reinforcement Learning** the easiest to understand on this topic, though it is best complemented with readings from other sources. Check these slides: [1](http://rll.berkeley.edu/deeprlcourse/f17docs/lecture_4_policy_gradient.pdf) | [2](https://www.ias.informatik.tu-darmstadt.de/uploads/Research/MPI2007/MPI2007peters.pdf) | [3](http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching_files/pg.pdf)
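For orientation, all three slide decks build toward the same identity, the policy gradient theorem in its REINFORCE (Monte Carlo) form, sketched here for reference:

```latex
% Policy gradient theorem, REINFORCE form: gradient of the expected
% return J under a parameterized policy \pi_\theta
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\; G_t \,\right]
```

where `G_t` is the (possibly discounted) return from time step `t`. REINFORCE estimates this expectation from sampled episodes and updates the policy parameters by gradient ascent.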
**Others**
* ASlib website. [link](http://www.coseal.net/aslib/)
* GRAPHS-2015 dataset source. [GitHub](https://github.com/coseal/aslib_data/tree/master/GRAPHS-2015)
* Reinforcement Learning study plan. [introductory](https://github.com/dennybritz/reinforcement-learning) | [deep RL](https://www.reddit.com/r/reinforcementlearning/comments/6w8kz1/d_study_group_for_deep_rl_policy_gradient_methods/)