I build security products and infrastructure at the intersection of AI, autonomous systems, and developer experience.
My work focuses on runtime governance, adversarial testing, agent integrity, and the architectural foundations required to secure AI systems in production.
At Cogensec, I'm building frameworks and tooling for measuring, enforcing, and stress-testing the structural integrity of autonomous agents.
Core thesis: most AI security today is exogenous, consisting of external guardrails wrapped around agents that have no security intelligence of their own.
That model breaks as agents become more autonomous.
We're building the endogenous alternative: security as a structural property of the agent itself.
- Build security products that increase protection without slowing delivery
- Design runtime governance and integrity systems for AI agents
- Turn complex controls into developer experiences teams actually adopt
- Create frameworks, models, and tooling for AI security evaluation
- Bridge technical depth, product strategy, and execution from idea to launch
| Project | Description |
|---|---|
| Agentegrity | A formal framework and reference implementation for measuring the structural integrity of autonomous AI agents. Defines three core properties and scores agents across four measurable dimensions. |
| ⚔️ Gideon | Autonomous red teaming CLI for AI agents. Won NVIDIA's developer contest at GTC 2026. Runs live on DGX Spark (Grace Blackwell). Built to attack AI agent systems so you know where integrity breaks before adversaries do. |
| 🛡️ Bastion | MCP security toolkit. Python monorepo with four packages, designed to extend security enforcement to the Model Context Protocol layer and external tool integrations. |
| Cortex Series | Ten specialized security models, each mapped to a brain region and a distinct security function for autonomous AI agents. Three functional clusters coordinated by the Corpus Callosum inter-module hub. |
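The project list above mentions a framework that scores agents across four measurable dimensions. As a purely illustrative sketch, the dimension names below are hypothetical stand-ins, not the framework's actual dimensions, such a score could be modeled as:

```python
from dataclasses import dataclass

@dataclass
class IntegrityScore:
    """Illustrative four-dimension integrity score (names are hypothetical)."""
    policy_adherence: float      # 0.0 to 1.0
    memory_integrity: float      # 0.0 to 1.0
    tool_use_safety: float       # 0.0 to 1.0
    behavioral_stability: float  # 0.0 to 1.0

    def composite(self) -> float:
        """Unweighted mean across the four dimensions."""
        dims = (self.policy_adherence, self.memory_integrity,
                self.tool_use_safety, self.behavioral_stability)
        return sum(dims) / len(dims)

score = IntegrityScore(0.9, 0.8, 0.7, 0.6)
print(round(score.composite(), 2))  # 0.75
```

A real implementation would define how each dimension is measured; this only shows the shape of an aggregate score.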
Runtime governance platform for AI agent deployments. Seven-layer architecture for policy enforcement, behavioral monitoring, and integrity assurance at scale.

AI security benchmarking and evaluation platform. Implements the Agentegrity scoring methodology. Positioned as the MITRE ATT&CK of AI security.
Defining security product strategy from category creation through enterprise adoption. Zero Trust architecture, platform security, secure-by-design systems, API security, developer security tooling, identity, access, and policy enforcement across Fortune 50 and startup-scale environments. |
Endogenous security architecture for autonomous agents, adversarial robustness, prompt injection and jailbreak resilience, RAG security, memory poisoning defense, behavioral drift detection, multi-agent threat modeling, and physical AI security. |
AWS, GCP, and Azure security architecture. Kubernetes, Docker, distributed systems. DevSecOps and CI/CD security practices. Self-service platform design and developer experience for security adoption. |
Product operating models, program and portfolio delivery, cross-functional leadership, large-scale transformation. 30+ products shipped across startup speed and Fortune 50 scale. Translating technical security into business outcomes. |
Developing the theoretical and architectural foundations for AI agent security.
| | Title | Description |
|---|---|---|
| 📄 | The Exogenous-Endogenous Security Distinction | Why all current AI security is architecturally insufficient as agents scale |
| 📄 | The AI Security Market Map Is Wrong | How the industry organizes security by function when it should organize by architecture |
| 📄 | Agentegrity: Structural Integrity for Autonomous AI Agents | The formal framework and manifesto |
| 📄 | The Cortex Series: A Security Nervous System for AI Agents | Neuroscience-inspired specialized security models |
| 📰 | Zero Day Agent Newsletter | Weekly intelligence on AI agent security |
Member of the OWASP AI Exchange authors group.
| | |
|---|---|
| 🎯 Range | Deep overlap across security research, product strategy, platform design, and enterprise execution |
| 🧠 Original Thinking | Developing frameworks and vocabulary for agent integrity, runtime governance, and endogenous AI security |
| 🛠️ Builder Mentality | Open-source systems, applied research, and production-minded security tooling |
| 🤝 Developer Trust | Security products designed for usability, speed, and adoption |
| ⚡ Timing | Focused on the next control layer for AI systems as autonomy, tool use, and physical AI become real deployment concerns |
- Agentegrity Framework – structural integrity framework for autonomous AI agents (Coming Soon)
- Gideon – autonomous red teaming CLI for AI agents
- LLM Security Guide – practical guidance for securing LLM systems
- AI Red Teaming Guide – applied adversarial testing patterns for AI systems
| | |
|---|---|
| 🏢 Senior Leadership, Verizon | Led AI infrastructure, 5G edge, zero-trust, and enterprise security programs. $25M+ revenue impact. Trained FBI and CIA analysts in digital forensics and incident response. |
| 🚀 Three-Time Entrepreneur | Cogensec (AI agent security) · MuseLytics (AI/ML music analytics) · LogistixAI (mobile OCR) |
| 🎓 MS, Computer Information Systems | Boston University |
| 15+ Years in Cybersecurity | Enterprise security architecture, adversarial research, AI infrastructure, federal law enforcement training |