
Commit ac6954b

Update index.html
1 parent b4f4079 commit ac6954b

File tree

1 file changed: +3 -3 lines changed


index.html

Lines changed: 3 additions & 3 deletions
@@ -202,19 +202,19 @@ <h1>Hariprashad Ravikumar</h1>
 </figure>

 <p class="a">
-I’m a theoretical particle physics PhD candidate at New Mexico State University, specializing in the application of GPU-accelerated high-performance computing (HPC) and machine learning to fundamental physics problems. My work focuses on studying the intrinsic motion of quarks and gluons and exploring Beyond Standard Model (BSM) physics through large-scale simulation and data analysis.
+I’m a theoretical particle physics PhD candidate at New Mexico State University, USA, specializing in the application of GPU-accelerated high-performance computing (HPC) and machine learning to fundamental physics problems. My work focuses on studying the intrinsic motion of quarks and gluons and exploring Beyond Standard Model (BSM) physics through large-scale simulation, quantum field theory, and symmetries.
 </p>

 <p class = "a">
-My PhD research under <a href="https://phys.nmsu.edu/facultydirectory/engelhardt_michael.html" target="_blank">Dr. Michael Engelhardt</a> (NMSU) focuses on lattice quantum chromodynamics (QCD) calculations of Transverse Momentum Dependent Parton Distribution Functions (TMDs). To achieve this, I built an end-to-end machine learning pipeline to process over 30,000 multidimensional observables from Monte Carlo simulations, achieving <strong>98%+ model fit accuracy</strong> using symbolic regression (PySR) with physics-constrained loss functions. To handle the multi-terabyte datasets generated, I developed GPU-accelerated CUDA C++ (cuFFT) pipelines, which <strong>reduced data processing time by 10x</strong> on HPC clusters. For robust analysis, I also created production-grade Python and Mathematica packages to manage jackknife resampling and ensure numerical stability.
+My PhD research under <a href="https://phys.nmsu.edu/facultydirectory/engelhardt_michael.html" target="_blank">Dr. Michael Engelhardt</a> (NMSU) focuses on lattice quantum chromodynamics (QCD) calculations of Transverse Momentum Dependent Parton Distribution Functions (TMDs). To achieve this, I built an end-to-end machine learning pipeline to process over 30,000 multidimensional observables from Monte Carlo simulations, achieving <strong>98%+ model fit accuracy</strong> using symbolic regression (PySR) with physics-constrained loss functions. To handle the multi-terabyte datasets generated, I developed GPU-accelerated CUDA C++ (cuFFT) pipelines, which <strong>reduced data processing time by 10x</strong> on HPC clusters. For robust analysis, I also created production-grade Python, C++, Lua, and Mathematica packages to manage jackknife resampling and ensure numerical stability.
 </p>




 <p class = "a">
-I am also collaborating with <a href="https://cnls.lanl.gov/~rajan/" target="_blank">Dr. Rajan Gupta</a> and <a href="https://sites.santafe.edu/~tanmoy/cv.html" target="_blank">Dr. Tanmoy Bhattacharya</a> from <strong>Los Alamos National Laboratory</strong> on calculating the hadronic matrix elements needed to connect nucleon Electric Dipole Moments (EDMs) to BSM physics. In this role, I develop and optimize parallelized C++ CUDA kernels to accelerate multi-terabyte calculations, achieving significant runtime reductions on GPU-accelerated HPC clusters like the NERSC Perlmutter. I design and deploy custom SLURM workflows to manage and execute over <strong>75,000 CPU/GPU compute hours</strong>, enabling robust, automated parallel analysis for these large-scale simulations.
+I am also collaborating with <a href="https://cnls.lanl.gov/~rajan/" target="_blank">Dr. Rajan Gupta</a> and <a href="https://sites.santafe.edu/~tanmoy/cv.html" target="_blank">Dr. Tanmoy Bhattacharya</a> from <strong>Los Alamos National Laboratory</strong> on calculating the hadronic matrix elements needed to connect nucleon Electric Dipole Moments (EDMs) to BSM physics. In this role, I develop and optimize parallelized C++ CUDA kernels to accelerate multi-terabyte calculations, achieving significant runtime reductions on GPU-accelerated HPC clusters like the NERSC Perlmutter. I increased model reliability through rigorous statistical validation on over 50,000 correlated data points, applying methods like AIC-based selection and chi-squared minimization with full covariance matrices. I design and deploy custom SLURM workflows to manage and execute over <strong>75,000 CPU/GPU compute hours</strong>, enabling robust, automated parallel analysis for these large-scale simulations.
 </p>

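The updated bio mentions symbolic regression with PySR under physics-constrained loss functions. The following is a minimal, hypothetical sketch of a PySR fit with an uncertainty-weighted, chi-squared-style elementwise loss; the toy data, operator set, and loss are assumptions and only stand in for the actual physics constraints, which are not part of this commit.

# Hypothetical sketch only: PySR fit with an uncertainty-weighted loss.
# Data, operators, and the loss are assumptions, not the actual pipeline.
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 2.0, size=(500, 2))      # toy stand-in for lattice observables
y = np.exp(-x[:, 0]) * (1.0 + 0.3 * x[:, 1]) + rng.normal(0.0, 0.01, 500)
sigma = np.full_like(y, 0.01)                 # per-point uncertainties

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "log"],
    # Weighted squared residuals; `elementwise_loss` is called `loss` in older PySR.
    elementwise_loss="f(prediction, target, weight) = weight * (prediction - target)^2",
    model_selection="best",
)
# Weights of 1/sigma^2 make the search minimize a chi-squared-like objective.
model.fit(x, y, weights=1.0 / sigma**2)
print(model.get_best())

In practice the quoted physics constraints would enter through the loss or through restrictions on the operator set; neither is reproduced here.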

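The bio also cites GPU-accelerated CUDA C++ (cuFFT) pipelines for multi-terabyte lattice data. The production code is described as CUDA C++; as a rough Python analogue only, the sketch below uses CuPy, whose FFT routines execute on the GPU via cuFFT. The array shapes and the momentum-projection framing are illustrative assumptions.

# Hypothetical Python analogue of the CUDA C++ (cuFFT) step, using CuPy.
import numpy as np
import cupy as cp

def momentum_project(correlators: np.ndarray) -> np.ndarray:
    """Batched 3D FFT over the spatial axes of (n_configs, Lx, Ly, Lz) data."""
    on_gpu = cp.asarray(correlators)                    # host -> device copy
    transformed = cp.fft.fftn(on_gpu, axes=(1, 2, 3))   # cuFFT-backed batched FFT
    return cp.asnumpy(transformed)                      # device -> host copy

# Example: 64 gauge configurations on a 32^3 spatial lattice.
data = np.random.default_rng(1).normal(size=(64, 32, 32, 32))
momenta = momentum_project(data)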

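Jackknife resampling is named as part of the analysis packages. A generic delete-one jackknife estimator, shown here as a textbook sketch rather than the author's package, might look like this:

# Generic delete-one jackknife, for illustration only.
import numpy as np

def jackknife(samples, estimator=np.mean):
    """Return (estimate, jackknife error) for data of shape (n_configs, ...)."""
    n = samples.shape[0]
    # Estimator evaluated on each delete-one subsample.
    theta = np.array([estimator(np.delete(samples, i, axis=0), axis=0)
                      for i in range(n)])
    theta_bar = theta.mean(axis=0)
    error = np.sqrt((n - 1) / n * ((theta - theta_bar) ** 2).sum(axis=0))
    return estimator(samples, axis=0), error

# Example: mean and jackknife error of 200 synthetic draws.
values = np.random.default_rng(2).normal(1.0, 0.1, size=200)
print(jackknife(values))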
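
For the EDM-related analysis, the bio mentions chi-squared minimization with full covariance matrices and AIC-based selection. A minimal sketch of a correlated fit plus AIC ranking, with made-up models and synthetic data, could read:

# Minimal sketch of a correlated chi-squared fit with AIC-based model ranking.
# Models, data, and covariance below are synthetic assumptions.
import numpy as np
from scipy.optimize import minimize

def correlated_chi2(params, model, x, y, cov_inv):
    r = y - model(x, *params)
    return r @ cov_inv @ r

def fit_and_aic(model, x, y, cov, x0):
    cov_inv = np.linalg.inv(cov)
    res = minimize(correlated_chi2, x0, args=(model, x, y, cov_inv))
    chi2 = res.fun
    aic = chi2 + 2 * len(x0)   # AIC up to a constant common to all models
    return res.x, chi2, aic

# Synthetic correlated data with an AR(1)-like covariance.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 8)
cov = 0.01 * 0.5 ** np.abs(np.subtract.outer(np.arange(8), np.arange(8)))
y = 0.2 + 0.5 * x + rng.multivariate_normal(np.zeros(8), cov)

constant = lambda x, a: a * np.ones_like(x)
linear = lambda x, a, b: a + b * x

for name, model, start in [("constant", constant, [0.0]), ("linear", linear, [0.0, 0.0])]:
    params, chi2, aic = fit_and_aic(model, x, y, cov, start)
    print(name, params, chi2, aic)

Here the AIC is just the minimized chi-squared plus twice the parameter count, which is enough to rank fit ansätze against the same data and covariance.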