███╗ ██╗ █████╗ ██╗████████╗██╗██╗ ██╗
████╗ ██║██╔══██╗██║╚══██╔══╝██║██║ ██╔╝
██╔██╗ ██║███████║██║ ██║ ██║█████╔╝
██║╚██╗██║██╔══██║██║ ██║ ██║██╔═██╗
██║ ╚████║██║ ██║██║ ██║ ██║██║ ██╗
╚═╝ ╚═══╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝╚═╝ ╚═╝
desenyon — naitik gupta
junior in high school. still cooking.
I write code that falls into one of three buckets: (a) something I just wanted to try because it seemed cool, (b) infrastructure I was tired of not having, or (c) both. The projects below are some of my best :D. I work across AI tooling, quantitative finance, and applied ML research, and I try not to ship anything I'd be embarrassed to explain in detail.
infinitecontex Scans your repository and generates a compact context file so any AI tool can immediately understand the project structure, dependencies, and intent. Eliminates the 10-minute re-explanation at the start of every AI-assisted session.
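The idea can be sketched in a few lines: walk the tree, skip the noise directories, and emit a compact file listing. This is a hypothetical illustration of the approach, not infinitecontex's actual output format or heuristics; `summarize_repo` and `SKIP` are names I made up.

```python
# Hypothetical sketch of the repo-scanning idea: walk a project tree,
# prune vendored/noise directories, and emit a compact context summary.
# Not infinitecontex's real format -- just the shape of the approach.
import os

SKIP = {".git", "node_modules", "__pycache__", ".venv"}

def summarize_repo(root: str, max_files: int = 50) -> str:
    lines = [f"# Context for {os.path.basename(os.path.abspath(root))}"]
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # mutating dirnames in place tells os.walk not to descend into them
        dirnames[:] = [d for d in dirnames if d not in SKIP]
        for name in sorted(filenames):
            if count >= max_files:
                return "\n".join(lines + ["... (truncated)"])
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            lines.append(f"- {rel} ({os.path.getsize(path)} bytes)")
            count += 1
    return "\n".join(lines)
```

A real tool would layer dependency parsing and intent inference on top; the walk-and-prune core stays the same.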
converge Repository intelligence platform that mathematically proves and automatically repairs Python dependency topologies using graph algorithms. If your environment is broken in a non-obvious way, it finds the cycle or conflict, proves it, and fixes it.
And A LOT more -- go look at my repos tab
GRAFT-Net Graph-Routed Adaptive Fusion Transformer Network. Architecture research combining graph routing with adaptive attention fusion.
ampp Autonomous theorem-proving system with machine-checked guarantees, targeting advanced combinatorics, number theory, and Erdős-style problems. It produces formally verified proofs.
aurane ML-oriented DSL that compiles down to idiomatic Python and PyTorch. The goal was a language where the semantics match how researchers actually think about tensor operations and training loops, rather than fighting framework boilerplate.
updraft-lm 117M-parameter transformer built from scratch, including the math. Built to actually understand what GPT-style models are doing, not just call .from_pretrained().
scope-rx Neural network interpretability library covering attribution methods, evaluation metrics, and visualization. Built for researchers who need to explain model predictions, not just inspect activations.
ALCAS Attention-guided antibody sequence landscape characterization using transformers, Pareto optimization, and uncertainty quantification. Computational biology research targeting enzyme design.
sigma AI finance research agent. Ask a question about a market, ticker, or strategy in natural language — get back analysis, charts, and backtests. Runs natively on macOS.
flux-rx Financial analysis and visualization library. Generates publication-quality interactive charts and dashboards for any stock, ETF, or index. Designed for researchers who want to skip the data plumbing and get to the analysis.
quantitative trading research Systematic strategy research on QuantConnect: volatility-targeted momentum, regime-adaptive allocation, and leveraged ETF rotation (TQQQ / UPRO / SOXL). Placed 8th in the Open League.
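Volatility targeting, the common thread in those strategies, is a simple sizing rule: scale exposure inversely to realized volatility, capped at a leverage limit. An illustrative sketch with made-up numbers (not live strategy parameters):

```python
# Illustrative volatility-targeted position sizing: lever up in calm
# markets, scale down in turbulent ones. Parameters are examples only.
import math

def vol_target_weight(daily_returns: list[float],
                      target_annual_vol: float = 0.15,
                      max_leverage: float = 2.0) -> float:
    """Scale exposure so realized vol matches the target, capped at max leverage."""
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    realized_annual_vol = math.sqrt(var) * math.sqrt(252)  # annualize daily vol
    if realized_annual_vol == 0:
        return max_leverage
    return min(target_annual_vol / realized_annual_vol, max_leverage)

calm = [0.001, -0.002, 0.0015, 0.0005, -0.001]   # low realized vol -> capped at 2x
wild = [0.03, -0.04, 0.05, -0.02, 0.03]          # high realized vol -> well under 1x
print(vol_target_weight(calm), vol_target_weight(wild))
```

In practice the lookback window, vol estimator, and rebalance frequency matter far more than this toy suggests, which is what the actual research explores.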
PressBench Benchmark measuring how LLM calibration degrades under adversarial social pressure — specifically, does a model change a correct answer when a user pushes back confidently? 498 questions across 6 domains, 3 difficulty tiers. Novel eval, no prior work does this directly.
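The core measurement described above reduces to a flip rate: of the answers the model initially got right, what fraction does it abandon after confident pushback? A sketch of that metric as I understand it from the description; the record schema here is my own stand-in, and the real benchmark queries actual LLMs rather than static records.

```python
# Sketch of the core PressBench-style metric: fraction of initially
# correct answers abandoned under user pushback. Record schema is a
# stand-in, not the benchmark's real data format.
def flip_rate(records: list[dict]) -> float:
    """Each record: {'correct': str, 'first': str, 'after_pressure': str}."""
    eligible = [r for r in records if r["first"] == r["correct"]]
    if not eligible:
        return 0.0
    flips = sum(1 for r in eligible if r["after_pressure"] != r["correct"])
    return flips / len(eligible)

records = [
    {"correct": "Paris", "first": "Paris", "after_pressure": "Paris"},  # held firm
    {"correct": "4",     "first": "4",     "after_pressure": "5"},      # caved
    {"correct": "Mars",  "first": "Venus", "after_pressure": "Venus"},  # wrong anyway
]
print(flip_rate(records))  # 0.5: one flip out of two initially-correct answers
```

Conditioning on initially-correct answers is the important design choice: it separates sycophantic capitulation from the model simply not knowing the answer.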
- SWE / AI Intern, RagaAI
- 1st Place, Synopsys Science Fair — ML category
- California State Science & Engineering Fair qualifier
- Published, Journal of Student Research
- QuantConnect Open League placer (8th)
- Research writing at RandomResearchAI
Python is the primary language for everything research-adjacent. Rust for anything where performance or correctness guarantees matter. TypeScript for tools with a UI layer. I have used C++ for hardware work and am comfortable in it, but it is not a daily driver.
On the ML side: PyTorch, the usual surrounding libraries, and enough linear algebra to know when the math is wrong before the loss diverges. For infrastructure: Docker, Linux, FastAPI, SQLite. For embedded: Raspberry Pi Pico, DWM3000 UWB, basic PCB-adjacent work.
still cooking
