I’m a theoretical particle physics PhD candidate at New Mexico State University, specializing in the application of GPU-accelerated high-performance computing (HPC) and machine learning to fundamental physics problems. My work focuses on studying the intrinsic motion of quarks and gluons and exploring Beyond Standard Model (BSM) physics through large-scale simulation and data analysis.
</p>
<p class="a">
My PhD research under <a href="https://phys.nmsu.edu/facultydirectory/engelhardt_michael.html" target="_blank">Dr. Michael Engelhardt</a> (NMSU) focuses on lattice quantum chromodynamics (QCD) calculations of Transverse Momentum Dependent Parton Distribution Functions (TMDs). To achieve this, I built an end-to-end machine learning pipeline that processes over 30,000 multidimensional observables from Monte Carlo simulations, achieving <strong>93%+ model fit accuracy</strong> using symbolic regression (PySR) with physics-constrained loss functions. To handle the multi-terabyte datasets generated, I developed GPU-accelerated CUDA C++ (cuFFT) pipelines, which <strong>reduced data processing time by 10x</strong> on HPC clusters. For robust analysis, I also created production-grade Python and Mathematica packages to manage jackknife resampling and ensure numerical stability.
</p>
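<p class="a">
To illustrate the resampling step mentioned above: a leave-one-out jackknife estimator can be sketched in a few lines of NumPy. This is a generic, simplified sketch (the production packages operate on correlated lattice observables, not plain 1-D samples), and the function name is illustrative.
</p>

```python
import numpy as np

def jackknife(samples, estimator=np.mean):
    """Leave-one-out jackknife estimate and standard error.

    Generic sketch of the resampling idea: recompute the estimator on
    each subsample with one point removed, then combine the results.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # Recompute the estimator on each leave-one-out subsample.
    loo = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    est = loo.mean()
    # The (n - 1)/n prefactor compensates for the strong correlation
    # between the leave-one-out estimates.
    err = np.sqrt((n - 1) / n * np.sum((loo - est) ** 2))
    return est, err

# For the sample mean, the jackknife error reproduces the usual
# standard error: here est == 2.5, err == sqrt(5/12).
est, err = jackknife([1.0, 2.0, 3.0, 4.0])
```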
<p class="a">
I am also collaborating with <a href="https://cnls.lanl.gov/~rajan/" target="_blank">Dr. Rajan Gupta</a> and <a href="https://sites.santafe.edu/~tanmoy/cv.html" target="_blank">Dr. Tanmoy Bhattacharya</a> from <strong>Los Alamos National Laboratory</strong> on calculating the hadronic matrix elements needed to connect nucleon Electric Dipole Moments (EDMs) to BSM physics. In this role, I develop and optimize parallelized C++ CUDA kernels to accelerate multi-terabyte calculations, achieving significant runtime reductions on GPU-accelerated HPC clusters such as NERSC Perlmutter. I design and deploy custom SLURM workflows to manage and execute over <strong>75,000 CPU/GPU compute hours</strong>, enabling robust, automated parallel analysis for these large-scale simulations.
</p>
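<p class="a">
As a flavor of the workflow orchestration: large campaigns like this are typically driven by generated SLURM array jobs, where each array element processes one chunk of the data. The sketch below is hypothetical (the partition constraint, the <code>./analyze</code> payload, and the function name are placeholders, not the actual production workflow).
</p>

```python
def slurm_array_script(job_name, n_tasks, gpus_per_task=1, time="04:00:00"):
    """Render a SLURM array-job batch script as a string.

    Hypothetical sketch of workflow generation; the constraint and the
    payload command are illustrative placeholders.
    """
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --array=0-{n_tasks - 1}",
        f"#SBATCH --gpus-per-task={gpus_per_task}",
        f"#SBATCH --time={time}",
        "#SBATCH --constraint=gpu",  # e.g. GPU nodes on Perlmutter
        # Each array element analyzes one data chunk, indexed by
        # $SLURM_ARRAY_TASK_ID.
        "srun ./analyze --chunk ${SLURM_ARRAY_TASK_ID}",
    ])

script = slurm_array_script("tmd-correlators", n_tasks=256)
```

Submitting one such script fans out into 256 independent tasks that the scheduler packs onto available GPU nodes, which is what makes burning through tens of thousands of compute hours tractable without manual babysitting.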
<p class="a">
In addition, I am collaborating with <a href="https://physics.sciences.ncsu.edu/people/crji/" target="_blank">Dr. Chueng-Ryong Ji</a> (<strong>North Carolina State University</strong>) on a project interpolating the manifestly covariant conformal group (SO(4,2)) between different forms of relativistic dynamics. For this work, I implement and manage Mathematica symbolic computation workflows on HPC clusters (NERSC Perlmutter) to analyze the complex algebraic structures and symmetry constraints inherent in the problem.
</p>
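<p class="a">
The actual project uses Mathematica symbolic workflows, but the underlying algebraic structure is easy to illustrate numerically: the so(4,2) generators can be realized as 6&times;6 matrices in the defining representation, and their commutation relations checked directly. This is a small standalone illustration in NumPy, not the project code.
</p>

```python
import numpy as np

# Metric for SO(4,2), signature (+, -, -, -, -, +).
eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0, 1.0])

def M(A, B):
    """Generator M_AB of so(4,2) in the defining 6x6 representation:
    (M_AB)^C_D = i (delta^C_A eta_BD - delta^C_B eta_AD)."""
    out = np.zeros((6, 6), dtype=complex)
    for C in range(6):
        for D in range(6):
            out[C, D] = 1j * ((C == A) * eta[B, D] - (C == B) * eta[A, D])
    return out

def comm(X, Y):
    return X @ Y - Y @ X

# The algebra closes:
# [M_AB, M_CD] = i (eta_BC M_AD + eta_AD M_BC - eta_BD M_AC - eta_AC M_BD)
A, B, C, D = 0, 1, 1, 2
lhs = comm(M(A, B), M(C, D))
rhs = 1j * (eta[B, C] * M(A, D) + eta[A, D] * M(B, C)
            - eta[B, D] * M(A, C) - eta[A, C] * M(B, D))
assert np.allclose(lhs, rhs)
```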
<p class="a">
My background provides me with a unique blend of deep physics intuition and hands-on expertise in C++/CUDA, parallel computing, and machine learning. I am driven to apply these skills to solve complex, data-intensive challenges and contribute to cutting-edge scientific and technical advancements.