Commit 0b4f010

Feat/webui code genesis (#841)
1 parent 42a7d01 commit 0b4f010

File tree

16 files changed: +2856 -849 lines


ms_agent/config/config.py

Lines changed: 17 additions & 0 deletions
@@ -201,6 +201,23 @@ def traverse_config(_config: Union[DictConfig, ListConfig, Any],
                     _config[idx] = extra[value[1:-1]]

         traverse_config(config)
+
+        for key, value in extra.items():
+            if '.' in key and not key.startswith('tools.'):
+                parts = key.split('.')
+                current = config
+                # Navigate/create nested structure
+                for i, part in enumerate(parts[:-1]):
+                    if not hasattr(current,
+                                   part) or getattr(current, part) is None:
+                        setattr(current, part, DictConfig({}))
+                    current = getattr(current, part)
+                final_key = parts[-1]
+                if not hasattr(current, final_key) or getattr(
+                        current, final_key) is None:
+                    logger.info(f'Adding new config key: {key}')
+                setattr(current, final_key, value)
+
         return None

     @staticmethod
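The hunk above expands dotted override keys (e.g. `llm.model`) into nested config attributes, creating intermediate containers on demand. A minimal standalone sketch of the same navigate-or-create pattern, using `types.SimpleNamespace` in place of OmegaConf's `DictConfig` (names here are illustrative, not the project's API):

```python
from types import SimpleNamespace


def set_dotted(config, key, value):
    """Expand a dotted key like 'llm.model' into nested attributes,
    creating intermediate namespaces where they are missing."""
    parts = key.split('.')
    current = config
    for part in parts[:-1]:
        # Create the intermediate level if it is absent or None
        if not hasattr(current, part) or getattr(current, part) is None:
            setattr(current, part, SimpleNamespace())
        current = getattr(current, part)
    setattr(current, parts[-1], value)


config = SimpleNamespace()
set_dotted(config, 'llm.model', 'gpt-4o')
set_dotted(config, 'llm.temperature', 0.2)
print(config.llm.model)  # -> gpt-4o
```

The real implementation additionally skips keys starting with `tools.` and logs when a brand-new key is introduced.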

ms_agent/llm/openai_llm.py

Lines changed: 4 additions & 3 deletions
@@ -42,9 +42,10 @@ def __init__(
         self.model: str = config.llm.model
         self.max_continue_runs = getattr(config.llm, 'max_continue_runs',
                                          None) or MAX_CONTINUE_RUNS
-        base_url = base_url or config.llm.openai_base_url or get_service_config(
-            'openai').base_url
-        api_key = api_key or config.llm.openai_api_key
+        base_url = base_url or getattr(
+            config.llm, 'openai_base_url',
+            None) or get_service_config('openai').base_url
+        api_key = api_key or getattr(config.llm, 'openai_api_key', None)

         self.client = openai.OpenAI(
             api_key=api_key,
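This hunk replaces direct attribute access with `getattr(..., None)` so a config that omits `openai_base_url` or `openai_api_key` falls through to the next source instead of raising `AttributeError`. A small sketch of the fallback chain (function and argument names are hypothetical, for illustration only):

```python
from types import SimpleNamespace


def resolve_base_url(explicit, llm_config, service_default):
    """Fallback chain: explicit argument, then an optional config
    attribute (read safely via getattr), then the service default."""
    return (explicit
            or getattr(llm_config, 'openai_base_url', None)
            or service_default)


cfg = SimpleNamespace()  # deliberately has no openai_base_url attribute
url = resolve_base_url(None, cfg, 'https://api.openai.com/v1')
print(url)  # -> https://api.openai.com/v1
```

With the pre-patch code, the bare `config.llm.openai_base_url` read would have crashed on such a config before the service default was ever consulted.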

projects/code_genesis/PR_ARTICLE.md

Lines changed: 282 additions & 0 deletions
Large diffs are not rendered by default.

projects/code_genesis/README.md

Lines changed: 51 additions & 49 deletions
@@ -1,15 +1,12 @@
-# Do a Website!
+# Code Genesis

-This is a development version of code generation. We hope you can play happily with this code. It can do:
+An open-source multi-agent framework that generates production-ready software projects from natural language requirements. It can do:

-* Complex code generation work, especially React frontend and Node.js backend tasks
-* A high success rate of generation
-* Free development of your own code generation workflows, fitting your scenario
-
-The codebase contains three YAML configuration files:
-
-- **workflow.yaml** - The entry configuration file for code generation; the command line automatically detects this file's existence
-- **agent.yaml** - Configuration file used for generating code projects, referenced by workflow.yaml
+* End-to-end project generation with frontend, backend, and database integration
+* High-quality code with LSP validation and dependency resolution
+* Topology-aware code generation that eliminates reference errors
+* Automated deployment to EdgeOne Pages
+* Flexible workflows: standard (7-stage) or simple (4-stage) pipelines

 This project needs to be used together with ms-agent.

@@ -48,45 +45,58 @@ PYTHONPATH=. openai_api_key=your-api-key openai_base_url=your-api-url python ms_

 The code will be output to the `output` folder in the current directory by default.

+## Configuration for Advanced Features
+
+To enable diff-based editing and automated deployment, configure the following in your YAML files:
+
+### 1. Enable Diff-Based File Editing
+
+Add `edit_file_config` to both [coding.yaml](coding.yaml) and [refine.yaml](refine.yaml):
+
+```yaml
+edit_file_config:
+  model: morph-v3-fast  # or other compatible models
+  api_key: your-api-key
+  base_url: https://api.morphllm.com/v1
+```
+
+Get your model and API key from https://www.morphllm.com
+
+### 2. Enable Automated Deployment
+
+Add `edgeone-pages-mcp` configuration to [refine.yaml](refine.yaml):
+
+```yaml
+mcp_servers:
+  edgeone-pages:
+    env:
+      EDGEONE_PAGES_API_TOKEN: your-edgeone-token
+```
+
+Get your `EDGEONE_PAGES_API_TOKEN` from https://pages.edgeone.ai/zh/document/pages-mcp
+
 ## Architecture Principles

-The workflow is defined in workflow.yaml and follows a two-phase approach:
+The workflow is defined in [workflow.yaml](workflow.yaml) and follows a 7-stage pipeline:

-**Design & Coding Phase:**
-1. A user query is given to the architecture
-2. The architecture produces a PRD (Product Requirements Document) & module design
-3. The architecture starts several tasks to finish the coding jobs
-4. The Design & Coding phase completes when all coding jobs are done
+**Standard Workflow:**
+1. **User Story Agent** - Parses user requirements into structured user stories
+2. **Architect Agent** - Selects technology stack and defines system architecture
+3. **File Design Agent** - Generates physical file structure from architectural blueprint
+4. **File Order Agent** - Constructs dependency DAG and topological sort for parallel code generation
+5. **Install Agent** - Bootstraps environment and resolves dependencies
+6. **Coding Agent** - Synthesizes code with LSP validation, following dependency order
+7. **Refine Agent** - Performs runtime validation, bug fixing, and automated deployment

-**Refine Phase:**
-1. The first three messages are carried to the refine phase (system, query, and architecture design)
-2. Building begins (in this case, npm install & npm run dev/build); error messages are incorporated into the process
-3. The refiner distributes tasks to programmers to read files and collect information (these tasks do no coding)
-4. The refiner creates a fix plan with the information collected from the tasks
-5. The refiner distributes tasks to fix the problems
-6. After all problems are resolved, users can input additional requirements, and the refiner will analyze and update the code accordingly
+Each agent produces structured intermediate outputs, ensuring engineering rigor throughout the pipeline.

 ## Developer Guide

 Function of each module:

-- **workflow.yaml** - Entry configuration file used to describe the entire workflow's running process. You can add other processes
-- **agent.yaml** - Configuration file for each Agent in the workflow. This file is loaded in the first Agent and passed to subsequent processes
-- **config_handler.py** - Controls config modifications for each Agent in the workflow, for example, dynamically modifying callbacks and tools that need to be loaded for different scenarios like Architecture, Refiner, Worker, etc.
-- **callbacks/artifact_callback.py** - Code storage callback. All code in this project uses the following format:
-
-```js:js/index.js
-... code ...
-```
-js/index.js is used for file storage. This callback parses all code blocks matching this format in a task and stores them as files.
-In this project, a worker can write multiple files because code writing is divided into different clusters, allowing more closely related modules to be written together, resulting in fewer bugs.
-- **callbacks/coding_callback.py** - This callback adds several necessary fields to each task's system before the `split_to_sub_task` tool is called:
-  * Complete project design
-  * Code standards (currently fixed to insert frontend standards)
-  * Code generation format
-- **callbacks/eval_callback** - Automatically compiles npm (developers using other languages can also modify this to other compilation methods) and hands it to the Refiner for checking and fixing:
-  * The Refiner first analyzes files that might be affected based on errors and uses `split_to_sub_task` to assign tasks for information collection
-  * The Refiner redistributes fix tasks based on collected information, using `split_to_sub_task` for repairs
+- **workflow.yaml** - Entry configuration file defining the 7-stage pipeline. You can customize the workflow sequence here
+- **user_story.yaml / architect.yaml / file_design.yaml / file_order.yaml / install.yaml / coding.yaml / refine.yaml** - Configuration files for each agent in the workflow
+- **workflow/*.py** - Python implementation for each agent's logic

 ## Human Evaluation

@@ -98,13 +108,5 @@ After all writing and compiling is finished, an input will be shown to enable hu
 * The browser console
 * Page errors
 3. After the website runs normally, you can adjust the website, add new features, or refactor something
-4. If you find the token cost is huge or there's an infinite loop, stop it at any time. The project serves as a cache in ~/.cache/modelscope/hub/workflow_cache
+4. If you find the token cost is huge or there's an infinite loop, stop it at any time.
 5. Feel free to optimize the code and bring new ideas
-
-## TODOs
-
-1. Generation is unstable
-2. Bug fixing cost long
-3. A recall tool to help locate related files and errors, preload some file content can help reduce errors
-   * example: Error reported in scss file, but the error actually in vite.config.js
-4. Too much thinking
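The File Order Agent described in the README diff builds a dependency DAG and emits files in topological order, so each file is generated after everything it references. The idea can be sketched with Python's standard-library `graphlib` (the file names and dependency map below are hypothetical, not taken from the project):

```python
from graphlib import TopologicalSorter

# Hypothetical file-dependency map: each file maps to the files it imports.
deps = {
    'src/App.jsx': {'src/api/client.js', 'src/components/Header.jsx'},
    'src/components/Header.jsx': {'src/styles/theme.js'},
    'src/api/client.js': set(),
    'src/styles/theme.js': set(),
}

# static_order() yields dependencies before their dependents, so code
# generation never references a file that does not exist yet.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Files with no ordering constraint between them (here `client.js` and `theme.js`) can also be generated in parallel, which is what the "parallel code generation" in stage 4 refers to.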

0 commit comments
