50 changes: 17 additions & 33 deletions docs/blog/act-via-code.mdx
iconType: "solid"
description: "The path to advanced code manipulation agents"
---

<Frame caption="Voyager (2023) solved agentic tasks with code execution">
<img src="/images/nether-portal.png" />
</Frame>


# Act via Code

Two and a half years since the launch of the GPT-3 API, code assistants have emerged as the most powerful and practically useful applications of LLMs. The rapid adoption of AI-powered IDEs and prototype builders isn't surprising — code is structured, deterministic, and rich with patterns, making it an ideal domain for machine learning. As model capabilities continue to scale, we're seeing compounding improvements in code understanding and generation.

Yet there's a striking gap between understanding and action. Today's AI agents can analyze enterprise codebases and propose sophisticated improvements—eliminating tech debt, untangling dependencies, improving modularity. But ask them to actually implement these changes across millions of lines of code, and they hit a wall. Their ceiling isn't intelligence—it's the ability to safely and reliably execute large-scale modifications on real, enterprise codebases.

The bottleneck isn't intelligence — it's tooling. By giving AI models the ability to write and execute code that modifies code, we're about to unlock an entire class of tasks that agents already understand but can't yet perform. Code execution environments represent the most expressive tool we could offer an agent—enabling composition, abstraction, and systematic manipulation of complex systems. When paired with ever-improving language models, this will unlock another step function improvement in AI capabilities.

## Beating Minecraft with Code Execution

In mid-2023, a research project called [Voyager](https://voyager.minedojo.org) made waves: it effectively solved Minecraft, outperforming the prior SOTA by several multiples. This was a massive breakthrough, as previous reinforcement learning systems had struggled for years with even basic Minecraft tasks.

While the AI community was focused on scaling intelligence, Voyager demonstrated something more fundamental: the right tools can unlock entirely new tiers of capability. The same GPT-4 model that struggled with Minecraft using standard agentic frameworks (like [ReAct](https://klu.ai/glossary/react-agent-model)) achieved remarkable results when allowed to write and execute code. This wasn't about raw intelligence—it was about giving the agent a more expressive way to act.

<Frame>
<img src="/images/voyager-performance.png" />
</Frame>

The breakthrough came from a simple yet powerful insight: let the AI write code. Instead of limiting the agent to primitive "tools," Voyager allowed GPT-4 to write and execute [JS programs](https://github.com/MineDojo/Voyager/tree/main/skill_library/trial2/skill/code) that controlled Minecraft actions through a clean API.

```javascript
// Example "action program" from Voyager, 2023
// written by gpt-4
async function chopSpruceLogs(bot) {
  const spruceLogCount = bot.inventory.count(mcData.itemsByName.spruce_log.id);
  const logsToMine = 3 - spruceLogCount;
  // ... (remainder collapsed in the diff)
}
```

This approach transformed the agent's capabilities. Rather than being constrained to atomic actions like `equipItem(...)`, it could create higher-level operations like [`craftShieldWithFurnace()`](https://github.com/MineDojo/Voyager/blob/main/skill_library/trial2/skill/code/craftShieldWithFurnace.js) through composing JS APIs. Furthermore, Wang et al. implemented a memory mechanism, in which successful "action programs" could later be recalled, copied, and built upon, effectively enabling the agent to accumulate experience.
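A rough sketch of how such a skill memory could work (the class, function names, and toy embedding below are illustrative assumptions, not Voyager's actual implementation):

```python
import math

# Toy character-count "embedding"; a real system would call an embedding model.
def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

class SkillLibrary:
    """Stores successful action programs, keyed by a description embedding."""

    def __init__(self):
        self.skills = []  # (embedding, program_source) pairs

    def add(self, description, program_source):
        self.skills.append((embed(description), program_source))

    def retrieve(self, task, k=1):
        # Return the k stored programs most similar to the task description.
        query = embed(task)
        ranked = sorted(self.skills, key=lambda s: cosine(s[0], query), reverse=True)
        return [source for _, source in ranked[:k]]

library = SkillLibrary()
library.add("chop spruce logs", "async function chopSpruceLogs(bot) { /* ... */ }")
library.add("craft a shield with a furnace", "async function craftShieldWithFurnace(bot) { /* ... */ }")
print(library.retrieve("chop spruce logs")[0])
```

Voyager's actual library stores the GPT-4-written JavaScript programs and retrieves them with learned embeddings; the toy embedding above only stands in for that step.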

<Frame>
<img src="/images/voyager-retrieval.png" />
</Frame>

As the Voyager authors noted:

## Code is an Ideal Action Space

The implications of code as an action space extend far beyond gaming. This architectural insight — letting AI act through code rather than atomic commands — will lead to a step change in the capabilities of AI systems. Nowhere is this more apparent than in software engineering, where agents already understand complex transformations but lack the tools to execute them effectively.

When an agent writes code, it gains several critical advantages over traditional atomic tools:

- **Composability**: Agents can build their own tools by combining simpler operations. This aligns perfectly with LLMs' demonstrated ability to compose and interpolate between examples to create novel solutions.

- **Constrained Action Space**: Well-designed APIs act as guardrails, making invalid operations impossible to express. The type system becomes a powerful tool for preventing entire classes of errors before they happen.

- **Objective Feedback**: Code execution provides immediate, unambiguous feedback through stack traces and error messages—not just confidence scores. This concrete error signal is invaluable for learning.

- **Natural Collaboration**: Programs are a shared language between humans and agents. Code explicitly encodes reasoning in a reviewable format, making actions transparent, debuggable, and easily re-runnable.
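A toy illustration of these properties (every class and function here is hypothetical, invented for this sketch): a higher-level skill is composed from atomic actions, and a violated precondition surfaces as a precise exception rather than a confidence score.

```python
# Hypothetical agent primitives, for illustration only.
class Inventory:
    def __init__(self):
        self.items = {}

    def add(self, item, n=1):
        self.items[item] = self.items.get(item, 0) + n

    def require(self, item, n):
        # Objective feedback: a precise, debuggable error, not a score.
        if self.items.get(item, 0) < n:
            raise ValueError(f"need {n} x {item}, have {self.items.get(item, 0)}")

def gather_wood(inv):
    """Atomic action."""
    inv.add("wood")

def craft_planks(inv):
    """Composed action: consumes wood, produces planks."""
    inv.require("wood", 1)
    inv.items["wood"] -= 1
    inv.add("planks", 4)

def craft_table(inv):
    """Higher-level skill built from the primitives above."""
    while inv.items.get("planks", 0) < 4:
        gather_wood(inv)
        craft_planks(inv)
    inv.require("planks", 4)
    inv.items["planks"] -= 4
    inv.add("crafting_table")

inv = Inventory()
craft_table(inv)
print(inv.items)  # {'wood': 0, 'planks': 0, 'crafting_table': 1}
```

Because `craft_table` is ordinary code, a human reviewer can read, rerun, and modify it, which is exactly the collaboration property described above.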

## For Software Engineering

Software engineering tasks are inherently programmatic and graph-based — dependency analysis, refactors, control flow analysis, etc. Yet today's AI agents interface with code primarily through string manipulation, missing the rich structure that developers and their tools rely on. By giving agents APIs that operate on the codebase's underlying graph structure rather than raw text, we can unlock a new tier of capabilities. Imagine agents that can rapidly traverse dependency trees, analyze control flow, and perform complex refactors while maintaining perfect awareness of the codebase's structure.
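To make the contrast concrete, here is a minimal dependency-graph traversal; the module graph and function are illustrative, not a specific library's API:

```python
from collections import deque

# Toy import graph: module -> modules it imports (illustrative data).
imports = {
    "app":    ["views", "models"],
    "views":  ["models", "utils"],
    "models": ["utils"],
    "utils":  [],
}

def transitive_deps(module):
    """Everything `module` depends on, directly or indirectly."""
    seen, queue = set(), deque(imports.get(module, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(imports.get(dep, []))
    return seen

print(sorted(transitive_deps("app")))  # ['models', 'utils', 'views']
```

String matching cannot answer "what breaks if I change `models`?"; a graph query like this answers it directly.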

Consider how a developer thinks about refactoring: it's rarely about direct text manipulation. Instead, we think in terms of high-level operations: "move this function," "rename this variable everywhere," "split this module." These operations can be encoded into a powerful Python API:

```python
for component in codebase.jsx_components:
    # powerful edit APIs that handle edge cases
    component.rename(component.name + 'Page')
```

This isn't just another code manipulation library—it's a scriptable language server that builds on proven foundations like LSP and codemods, but designed specifically for programmatic analysis and refactoring.

## What does this look like?

At Codegen, we've built exactly this system. Our approach centers on four key principles:

- **Python foundation**: enabling easy composition with existing tools and workflows.
- **In-memory operations**: handling large-scale changes efficiently, for performance.
- **Open source**: allowing developers and AI researchers to extend and enhance it.
- **Thorough documentation**: not just for humans, but for the next generation of AI agents that will build upon it.

## What does this enable?

We've already used this approach to merge hundreds of thousands of lines of code in enterprise codebases. Our tools have automated complex tasks like feature flag deletion, test suite reorganization, import cycle elimination, and dead code removal. But more importantly, we've proven that code-as-action-space isn't just theoretical—it's a practical approach to scaling software engineering.
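Dead code removal, for instance, reduces to a graph query over usages. The objects below are hand-built stand-ins for illustration; a real run would derive them from parsed source rather than hardcoding them:

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    usages: list = field(default_factory=list)  # call sites referencing this function
    removed: bool = False

    def remove(self):
        self.removed = True

# Stand-in data; real tooling would build this from the codebase graph.
funcs = [
    Function("render_page", usages=["app.routes"]),
    Function("old_migration"),  # no usages anywhere: dead code
]

for fn in funcs:
    if not fn.usages:
        fn.remove()

print([fn.name for fn in funcs if fn.removed])  # ['old_migration']
```

The same shape of loop, pointed at real usage data, is what turns "delete the dead code" from a multi-week audit into a reviewable script.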

This is just the beginning. With Codegen, we're providing the foundation for the next generation of code manipulation tools—built for both human developers and AI agents. We believe this approach will fundamentally change how we think about and implement large-scale code changes, making previously impossible tasks not just possible, but routine.
5 changes: 5 additions & 0 deletions docs/blog/codemod-frameworks.mdx
icon: "code-compare"
iconType: "solid"
---

## Others to add
- [Abracadabra](https://github.com/nicoespeon/abracadabra)
- [Rope](https://rope.readthedocs.io/en/latest/overview.html#rope-overview)
- [Grit](https://github.com/getgrit/gritql)

Code transformation tools have evolved significantly over the years, each offering unique approaches to programmatic code manipulation. Let's explore the strengths and limitations of major frameworks in this space.

## Python's AST Module
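As a baseline for comparison, here is the kind of rename these frameworks mechanize, written against Python's built-in `ast` module (the snippet and names are a sketch, not taken from any one framework):

```python
import ast

source = "def greet():\n    return 'hello'"
tree = ast.parse(source)

# Rename every function named `greet` to `welcome`.
class Renamer(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        if node.name == "greet":
            node.name = "welcome"
        return node

new_tree = Renamer().visit(tree)
print(ast.unparse(new_tree))  # the function is now named `welcome`
```

Note what this does not handle: call sites in other files, imports, and shadowed names. The frameworks below exist largely to close that gap.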
29 changes: 3 additions & 26 deletions docs/blog/posts.mdx
icon: "clock"
iconType: "solid"
---

<Update
label="2024-01-24"
description="Static Analysis in the Age of AI Coding Assistants"
title="Static Analysis in the Age of AI Coding Assistants"
>
## Static Analysis in the Age of AI Coding Assistants

Why traditional language servers aren't enough for the future of AI-powered code manipulation

</Update>

<Update
label="2024-01-24"
description="A Deep Dive into Codemod Frameworks"
title="Codemod Frameworks"
href="/blog/codemod-frameworks"
>
## Codemod Frameworks

Comparing popular tools for programmatic code transformation

</Update>

<Update label="2024-01-24" description="Acting via Code">

## Act via Code

Why code as an action space will lead to a step function improvement in agent capabilities.

<Card
img="/images/nether-portal.png"
title="Act via Code"
href="/blog/act-via-code"
/>

</Update>
Binary file modified docs/images/nether-portal.png
28 changes: 14 additions & 14 deletions docs/introduction/getting-started.mdx
Follow our step-by-step tutorial to get up and running with Codegen.

## Installation

We recommend using [uv](https://github.com/astral-sh/uv) to install Codegen's CLI globally.

First, install `uv` if you haven't already:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Then install Codegen:

```bash
uv tool install codegen
```


## Quick Start with Jupyter

The [codegen notebook](/cli/notebook) command creates a virtual environment and opens a Jupyter notebook for quick prototyping. This is often the fastest way to get up and running.


```bash
# Navigate to your repository
cd path/to/your/repo
codegen notebook
```

## Initializing a Codebase

Instantiating a [Codebase](/api-reference/core/Codebase) will automatically parse a codebase and make it available for manipulation.

```python
from codegen import Codebase
codebase = Codebase("path/to/repo")  # remaining setup collapsed in the diff
```

Let's explore the codebase we just initialized.

Here are some common patterns for code navigation in Codegen:

- Iterate over all [Functions](/api-reference/core/Function) with [Codebase.functions](/api-reference/core/Codebase#functions)
- View class inheritance with [Class.superclasses](/api-reference/core/Class#superclasses)
- View function call-sites with [Function.call_sites](/api-reference/core/Function#call-sites)
- View function usages with [Function.usages](/api-reference/core/Function#usages)

```python
# Print overall stats
# ... (intervening examples collapsed in the diff)
codebase.commit()
```

<Warning>
In order to commit changes to your filesystem, you must call
[codebase.commit()](/api-reference/core/Codebase#commit). Learn more about
[commit() and reset()](/building-with-codegen/commit-and-reset).
</Warning>

### Finding Specific Content
2 changes: 1 addition & 1 deletion docs/introduction/overview.mdx
```python
function.set_docstring('new docstring') # set docstring
```
## Installation

```bash
uv tool install codegen
```

## Get Started