Commit 9e6dbce

minor updates
1 parent ba2b35d commit 9e6dbce

3 files changed: +19 -36 lines changed

index.md

Lines changed: 17 additions & 19 deletions
@@ -7,8 +7,20 @@ layout: default
 
 # Softwares
 
-![image](./assets/points.svg)
 
+## FastGPs
+
+```
+pip install fastgps
+```
+
+Gaussian process regression (GPR) models typically require $\mathcal{O}(n^2)$ storage and $\mathcal{O}(n^3)$ computations. [FastGPs](https://alegresor.github.io/fastgps) implements GPR which requires only $\mathcal{O}(n)$ storage and $\mathcal{O}(n \log n)$ computations by pairing certain quasi-random sampling locations with matching kernels to yield structured Gram matrices. We support
+- GPU scaling,
+- batched inference,
+- robust hyperparameter optimization, and
+- multi-task GPR.
+
+![image](./assets/2d_gp.svg)
 
 ## QMCPy
 
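The $\mathcal{O}(n \log n)$ claim in the FastGPs description above comes from the Gram matrix structure: on a one-dimensional lattice $x_i = i/n$ with a shift-invariant periodic kernel, the Gram matrix is circulant and is diagonalized by the FFT. The following is a minimal numpy sketch of that idea, not the FastGPs API; the kernel (built from the Bernoulli polynomial $B_2$) and the sizes are chosen only for illustration:

```python
import numpy as np

# Sketch of the structured-Gram-matrix idea (NOT the FastGPs API).
# On the 1d lattice x_i = i/n, a shift-invariant periodic kernel
# k(x, y) = kappa((x - y) mod 1) gives a circulant Gram matrix,
# so linear solves cost O(n log n) via the FFT.
def kappa(d):
    # periodic kernel built from the Bernoulli polynomial B_2(t) = t^2 - t + 1/6
    d = d % 1.0
    return 1.0 + (d * d - d + 1.0 / 6.0)

n = 256
x = np.arange(n) / n
c = kappa(x)                              # first column of the circulant Gram matrix
K = kappa(x[:, None] - x[None, :])        # dense Gram matrix, built only for checking
y = np.sin(2 * np.pi * x)                 # toy observations

# circulant solve: K alpha = y  <=>  alpha = ifft(fft(y) / fft(c))
alpha_fft = np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(c)))  # O(n log n)
alpha_dense = np.linalg.solve(K, y)                              # O(n^3) reference
print(np.allclose(alpha_fft, alpha_dense))
```

In higher dimensions the analogous pairings (lattices with shift-invariant kernels, digital nets with digitally-shift-invariant kernels) yield Gram matrices diagonalized by fast transforms of the same flavor; this sketch shows only the simplest one-dimensional circulant case.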
@@ -18,10 +30,10 @@ pip install qmcpy
 
 [QMCPy](https://qmcsoftware.github.io/QMCSoftware/) is a Python package for Quasi-Monte Carlo (QMC) which contains
 - quasi-random (low discrepancy) sequence generators and randomization routines, including
-  - **lattices** with
+  - *lattices* with
     - extensible constructions
     - random shifts
-  - **digital nets** (e.g. Sobol' points) with
+  - *digital nets* (e.g. Sobol' points) with
     - extensible constructions
     - random digital shifts,
     - linear matrix scrambling,
@@ -36,31 +48,17 @@ pip install qmcpy
 - a suite of diverse use cases, and
 - automatic variable transforms.
 
-
+![image](./assets/points.svg)
 
 ![image](./assets/mc_vs_qmc.svg)
 
-## FastGPs
-
-```
-pip install fastgps
-```
-
-Gaussian process regression (GPR) models typically require $\mathcal{O}(n^2)$ storage and $\mathcal{O}(n^3)$ computations. [FastGPs](https://alegresor.github.io/fastgps) implements GPR which requires only $\mathcal{O}(n)$ storage and $\mathcal{O}(n \log n)$ computations by pairing certain quasi-random sampling locations with matching kernels to yield structured Gram matrices. We support
-- GPU scaling,
-- batched inference,
-- robust hyperparameter optimization, and
-- multi-task GPR.
-
-![image](./assets/2d_gp.svg)
-
 ## QMCGenerators.jl
 
 ```
 ] add QMCGenerators
 ```
 
-[QMCGenerators.jl](https://alegresor.github.io/QMCGenerators.jl/) is a Julia package includes routines to generate and randomize quasi-random sequences used in Quasi-Monte Carlo. Supports the suite of low discrepancy sequence generators and randomization routines available in [QMCPy](https://qmcsoftware.github.io/QMCSoftware/). This package is a translation and enhancement of Dirk Nuyens' [Magic Point Shop](https://people.cs.kuleuven.be/~dirk.nuyens/qmc-generators/).
+[QMCGenerators.jl](https://alegresor.github.io/QMCGenerators.jl/) is a Julia package which includes routines to generate and randomize quasi-random sequences used in Quasi-Monte Carlo. It supports the suite of low discrepancy sequence generators and randomization routines available in [QMCPy](https://qmcsoftware.github.io/QMCSoftware/); see the description above. This package is a translation and enhancement of Dirk Nuyens' [Magic Point Shop](https://people.cs.kuleuven.be/~dirk.nuyens/qmc-generators/).
 
 
 
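To keep examples in one language, here is a Python (not Julia) sketch of the simplest digital construction the packages above generalize: the base-2 van der Corput sequence, which is the first coordinate of a Sobol' sequence, together with a random digital shift. The function name is ours and this is not QMCGenerators.jl's API:

```python
import numpy as np

# Python sketch (NOT QMCGenerators.jl's API; names are ours) of the base-2
# van der Corput sequence with a random digital shift: every point's bits
# are XORed with one random bit vector.
def bit_reverse(i, bits=32):
    """Reverse the low `bits` bits of the nonnegative integer i."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

rng = np.random.default_rng(11)
n, bits = 16, 32
ints = np.array([bit_reverse(i, bits) for i in range(n)], dtype=np.uint64)
points = ints / 2.0 ** bits                       # van der Corput points in [0, 1)
shift = rng.integers(0, 2 ** bits, dtype=np.uint64)
shifted = (ints ^ shift) / 2.0 ** bits            # random digital (XOR) shift

print(points[:4])  # 0, 1/2, 1/4, 3/4
```

Sobol' sequences generalize this by multiplying the bit vector of $i$ by per-dimension generator matrices before the reversal, and scrambling randomizes the digits more aggressively than a single XOR shift; both refinements preserve the dyadic stratification the assertion-friendly shift already exhibits.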

resume/sorokin_resume.pdf

0 Bytes
Binary file not shown.

resume/sorokin_resume.tex

Lines changed: 2 additions & 17 deletions
@@ -26,21 +26,6 @@
 \social[github]{alegresor}
 \social[googlescholar]{akk3XSEAAAAJ}
 
-% \newcommand{\itlink}[2]{\textcolor{linkColor}{\textit{\link[#1]{#2}}}}
-% \newcommand{\bfitlink}[2]{\textcolor{linkColor}{\textbf{\textit{\link[#1]{#2}}}}}
-% \newcommand{\hpref}[2]{\hyperref[#1]{\textcolor{linkColor}{\textit{#2}}}}
-
-
-% \newcommand*{\newentry}[3][.25em]{%
-%   \begin{tabular}[t]{@{}p{0.125\textwidth}@{\hspace{\separatorcolumnwidth}}}%
-%     \raggedleft\hintstyle{#2}%
-%   \end{tabular}%
-%   \begin{tabular}[t]{@{}p{\dimexpr0.875\textwidth-\separatorcolumnwidth}@{}}
-%     #3
-%   \end{tabular}%
-%   \par\addvspace{#1}}
-
-
 \usepackage[
   backend=biber,
   %defernumbers=true,
@@ -74,8 +59,8 @@ \subsection{Education}
 \newentry{\normalfont{2017 - 2021}}{\textbf{B.S. in Applied Math, Minor in Computer Science.} IIT. Summa Cum Laude. GPA $3.94 / 4$.}
 
 \subsection{Experiences}
-\newentry{\normalfont{Jan - Dec 2025}}{\textbf{DOE SCGSR Fellow in Applied Mathematics} at \textbf{Sandia National Laboratory} in Livermore, CA. I am researching Gaussian process based scientific ML models for machine precision operator learning. I am also developing fast, scalable multi-task Gaussian processes for multi-fidelity modeling. We are preparing publications and open-source software with scalable GPU support e.g. see \texttt{FastGPs} below.}
-\newentry{\normalfont{Summer 2024}}{\textbf{Scientific Machine Learning Researcher} at \textbf{FM (Factory Mutual Insurance Company).} I built SciML models, including Physics Informed Neural Networks (PINNs) and Deep Operator Networks (DeepONets), for solving Radiative Transport Equations (RTEs) used to speed up CFD fire dynamics simulations. Resulted in publication of \citetitle{sorokin.RTE_DeepONet}.}
+\newentry{\normalfont{Jan - Dec 2025}}{\textbf{DOE SCGSR Fellow in Applied Mathematics} at \textbf{Sandia National Laboratory} in Livermore, CA. I am researching Gaussian process based scientific ML models for machine precision PDE solutions. I am also developing fast, scalable multi-task Gaussian processes for multi-fidelity modeling. We are preparing publications and open-source software with HPC support such as \texttt{FastGPs} below.}
+\newentry{\normalfont{Summer 2024}}{\textbf{Scientific Machine Learning Researcher} at \textbf{FM (Factory Mutual Insurance Company).} I built scientific ML models, including Physics Informed Neural Networks (PINNs) and Deep Operator Networks (DeepONets), for solving Radiative Transport Equations (RTEs) used to speed up CFD fire dynamics simulations. Resulted in publication of \citetitle{sorokin.RTE_DeepONet}.}
 \newentry{\normalfont{Summer 2023}}{\textbf{Graduate Intern} at \textbf{Los Alamos National Laboratory.} I modeled the solution processes of PDEs with random coefficients using efficient and error aware Gaussian processes. Resulted in publication of \citetitle{sorokin.gp4darcy}.}
 \newentry{\normalfont{Summer 2022}}{\textbf{Givens Associate Intern} at \textbf{Argonne National Laboratory}. I researched methods to efficiently estimate failure probability using Monte Carlo with non-parametric importance sampling. Resulted in publication of \citetitle{sorokin.adaptive_prob_failure_GP}.}
 \newentry{\normalfont{Summer 2021}}{\textbf{ML Engineer Intern} at \textbf{SigOpt, an Intel Company}. I developed novel meta-learning techniques for model-aware hyperparameter tuning via Bayesian optimization. In a six person ML engineering team, I contributed production code and learned key elements of the AWS stack. Resulted in publication of \citetitle{sorokin.sigopt_mulch}.}
