Peiyang-Song/README.md

Welcome to my GitHub 👋

I am Peiyang Song, a senior undergraduate majoring in Computer Science at the California Institute of Technology (Caltech), with a minor in Robotics.

I build agentic reasoning systems: AI agents that can plan, act, verify, adapt, and improve over time. My work centers on making reasoning both capable and dependable, across formally verified environments and open-ended natural language. A recurring theme is leveraging structure—often neuro-symbolic—to provide control, interpretability, and reliability in increasingly autonomous systems.

  • (1) Advancing formal reasoning from static models to adaptive agents. In theorem proving, correctness is verifiable—but effective agency is hard. My work advances formal reasoning systems by enabling learning through environment and data infrastructures [LeanDojo], bringing learned capabilities back into the prover via neural-assisted automation [Lean Copilot] and human-centered tooling [Human-AI Formalization], and more recently moving beyond static prediction toward adaptive agents [Adaptation] that operate over evolving libraries [LeanAgent] and long-horizon contexts [LeanProgress].

  • (2) Diagnosing and strengthening reasoning in natural language. Outside formal systems, correctness guarantees disappear, so reliability must come from principled diagnosis. My work takes a human-grounded, failure-driven approach: connecting classic cognitive phenomena to modern LLM behavior [A-Not-B], examining context sensitivity and cultural fairness in language [Idioms], extending from individual failure modes to questioning foundational assumptions in behavioral evaluation [Personality Illusion], and developing a systematic framework for understanding and mitigating LLM reasoning failures [Reasoning Failures].

  • (3) Enhancing the efficiency of reasoning systems. I complement algorithmic advances with a system-level perspective, developing architectural and temporal arithmetic techniques for efficient computation [Delay Space] [DelayNet].

My long-term goal is to build AI systems whose reasoning is as creative as human intuition and as dependable as formal logic.

You can find more about me and my work on my Personal Website and Google Scholar page.

I'm always open to collaborations. Please feel free to email me at psong@caltech.edu.

[Last Updated: Feb. 2026]

Pinned repositories

  1. lean-dojo/LeanCopilot — LLMs as Copilots for Theorem Proving in Lean (C++ · ★ 1.2k · 122 forks)

  2. lean-dojo/LeanDojo — Tool for data extraction and interacting with Lean programmatically (Python · ★ 777 · 116 forks)

  3. lean-dojo/ReProver — Retrieval-Augmented Theorem Provers for Lean (Python · ★ 318 · 68 forks)

  4. Awesome-LLM-Reasoning-Failures — Repo for "Large Language Model Reasoning Failures" (★ 163 · 13 forks)

  5. pat-jj/Awesome-Adaptation-of-Agentic-AI — Repo for "Adaptation of Agentic AI" (★ 603 · 51 forks)

  6. psychology-of-AI/Personality-Illusion — The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs (Jupyter Notebook · ★ 100 · 6 forks)