AI Researcher · Cognitive Intelligence · AI Safety
Designing transparent, safe, and cognitively grounded reasoning systems.
I work at the intersection of cognitive science, artificial intelligence, and AI safety, with an emphasis on building systems that can reason, explain, and align with human values.
- 🧠 Cognitive Intelligence Systems (ACT-R / SOAR-inspired reasoning)
- 🧩 AI Safety & Interpretability (chain-of-thought monitoring, alignment)
- ⚙️ High-Performance AI Systems (CUDA, scalable learning & simulation)
- 🤝 Human–AI Reasoning Transparency
A non-clinical cognitive AI system designed to:
- Infer intent, emotion, and cognitive load
- Reason using cognitive architectures
- Provide transparent recommendations (music, reading, reflection)
🔗 Repository: work in progress
📄 Research notes: in preparation
A technical AI safety framework to:
- Inspect internal reasoning traces of LLMs
- Detect hallucinations and unsafe reasoning paths
- Improve interpretability and auditability
🔗 Repository: in preparation
- Cognitive AI & Reasoning Systems
- Technical AI Safety & Alignment
- Mechanistic Interpretability
- High-Performance Computing for AI
- Human–AI Interaction & Transparency
- AI for Mental Health & Education
Languages & Frameworks
Python · PyTorch · TensorFlow · Hugging Face · CUDA
Systems & Libraries
Linux · OpenCV · NumPy · Pandas · Scikit-learn
- 🔗 LinkedIn: https://linkedin.com/in/sumit45
- 📧 Email: [email protected]
Building AI systems that reason, explain, and align.