Commit cf685fe

add dev plan to readme.md (#712)
1 parent: 4e5e06d

File tree

4 files changed: +120 −1 lines


README.CN.md

Lines changed: 60 additions & 0 deletions

@@ -14,6 +14,7 @@
 
 LazyLLM is a low-code development tool for building **multi-agent** large language model applications. It helps developers build complex AI applications at very low cost and continuously iterate to optimize their performance. LazyLLM provides a convenient workflow for assembling applications, along with a large number of standard processes and tools for every stage of application development. <br>
 The AI application development flow based on LazyLLM is **prototype building -> data feedback -> iterative optimization**: you can first quickly get an application prototype running on LazyLLM, then analyze bad cases using task data from your scenario, and then iterate on the algorithms and fine-tune the models at the key stages of the application, gradually improving the overall performance. <br>
+LazyLLM is committed to unifying agility and efficiency: developers can iterate on algorithms efficiently and then apply the refined algorithms to industrial production, with support for multiple users, fault tolerance, and high concurrency.
 **User documentation**: https://docs.lazyllm.ai/ <br>
 
 Scan the QR code below with WeChat to join the discussion group (left), or watch the videos to learn more (right) <br>

@@ -340,3 +341,62 @@ Flow is the data stream defined in LazyLLM, describing how data is passed from one callable
 1. You can easily combine, add, and replace modules and components; the design of Flow makes adding new features simple and makes collaboration between different modules and even projects easier.
 2. Through a standardized interface and data flow mechanism, Flow reduces the repetitive work developers face when handling data transfer and transformation. Developers can focus more on core business logic, improving overall development efficiency.
 3. Some Flows support asynchronous processing and parallel execution, which can significantly improve response speed and system performance when handling large-scale data or complex tasks.
+
+## 9. Future Plans
+
+### 9.1 Timeline
+V0.6 is expected to start on September 1st and last 3 months, with minor releases along the way, such as v0.6.1 and v0.6.2
+V0.7 is expected to start on December 1st and last 3 months, with minor releases along the way, such as v0.7.1 and v0.7.2
+
+### 9.2 Feature Modules
+9.2.1 RAG
+- 9.2.1.1 Engineering
+  - Integrate the capabilities of LazyRAG into LazyLLM (V0.6)
+  - Extend RAG's macro Q&A capability to multiple knowledge bases (V0.6)
+  - Fully support horizontal scaling of the RAG modules, including multi-machine deployment of collaborating RAG algorithms (V0.6)
+  - Integrate at least one open-source knowledge graph framework (V0.6)
+  - Support common data splitting strategies, no fewer than 20, covering all document types (V0.6)
+- 9.2.1.2 Data Capabilities
+  - Table parsing (V0.6 - 0.7)
+  - CAD image parsing (V0.7 -)
+- 9.2.1.3 Algorithm Capabilities
+  - Support processing relatively structured text such as CSV (V0.6)
+  - Multi-hop retrieval (links in documents, references, etc.) (V0.6)
+  - Information conflict handling (V0.7)
+  - AgenticRL & code-writing problem-solving capability (V0.7)
+
+9.2.2 Feature Modules
+- Memory support (V0.6)
+- Distributed Launcher support (V0.7)
+- Database-backed Globals support (V0.6)
+- ServerModule can be published as an MCP service (v0.7)
+- Integration of online sandbox services (v0.7)
+
+9.2.3 Model Training and Inference
+- Support deployment and inference with the OpenAI interface (V0.6)
+- Unify fine-tuning and inference prompts (V0.7)
+- Provide fine-tuning examples in Examples (V0.7)
+- Integrate 2-3 prompt repositories so prompts can be selected from them directly (V0.6)
+- Support smarter model type detection and inference framework selection; refactor and simplify the auto-finetune framework selection logic (V0.6)
+- Full-pipeline GRPO support (V0.7)
+
+9.2.4 Documentation
+- Improve the API documentation: ensure every public interface is documented, documented parameters match function parameters, and executable sample code is provided (V0.6)
+- Improve the CookBook documentation: grow the cases to 50, with comparisons against LangChain / LlamaIndex (code volume, speed, extensibility) (V0.6)
+- Improve the Environment documentation: add installation instructions for win/linux/macos and describe the package splitting strategy (V0.6)
+- Improve the Learn documentation: first teach how to use large models, then how to build agents, then how to use workflows, and finally how to build RAG (V0.6)
+
+9.2.5 Quality
+- Reduce CI time to under 10 minutes by mocking most modules (V0.6)
+- Add a daily build and move time-consuming / token-heavy tasks into it (V0.6)
+
+9.2.6 Development, Deployment and Release
+- Debug optimization (v0.7)
+- Process monitoring [output + performance] (v0.7)
+- Environment isolation and automatic environment setup for the training and inference frameworks we depend on (V0.6)
+
+9.2.7 Ecosystem
+- Promote the open-sourcing of LazyCraft (V0.6)
+- Promote the open-sourcing of LazyRAG (V0.7)
+- Mirror the code to 2 code hosting sites besides Github and pursue community collaboration (V0.6)

README.md

Lines changed: 58 additions & 0 deletions

@@ -348,3 +348,61 @@ Flow in LazyLLM defines the data stream, describing how data is passed from one
 1. You can easily combine, add, and replace various modules and components; the design of Flow makes adding new features simple and facilitates collaboration between different modules and even projects.
 2. Through a standardized interface and data flow mechanism, Flow reduces the repetitive work developers face when handling data transfer and transformation. Developers can focus more on core business logic, thus improving overall development efficiency.
 3. Some Flows support asynchronous processing and parallel execution, significantly enhancing response speed and system performance when dealing with large-scale data or complex tasks.
+
+## Future Plans
+
+### Timeline
+V0.6 is expected to start on September 1st and last 3 months, with minor releases along the way, such as v0.6.1 and v0.6.2
+V0.7 is expected to start on December 1st and last 3 months, with minor releases along the way, such as v0.7.1 and v0.7.2
+
+### Feature Modules
+RAG
+- Engineering
+  - Integrate the capabilities of LazyRAG into LazyLLM (V0.6)
+  - Extend RAG's macro Q&A capability to multiple knowledge bases (V0.6)
+  - Fully support horizontal scaling of the RAG modules, including multi-machine deployment of collaborating RAG algorithms (V0.6)
+  - Integrate at least one open-source knowledge graph framework (V0.6)
+  - Support common data splitting strategies, no fewer than 20, covering all document types (V0.6)
+- Data Capabilities
+  - Table parsing (V0.6 - 0.7)
+  - CAD image parsing (V0.7 -)
+- Algorithm Capabilities
+  - Support processing relatively structured text such as CSV (V0.6)
+  - Multi-hop retrieval (links in documents, references, etc.) (V0.6)
+  - Information conflict handling (V0.7)
+  - AgenticRL & code-writing problem-solving capability (V0.7)
+
+Functional Modules
+- Memory support (V0.6)
+- Distributed Launcher support (V0.7)
+- Database-backed Globals support (V0.6)
+- ServerModule can be published as an MCP service (v0.7)
+- Integration of online sandbox services (v0.7)
+
+Model Training and Inference
+- Support deployment and inference with the OpenAI interface (V0.6)
+- Unify fine-tuning and inference prompts (V0.7)
+- Provide fine-tuning examples in Examples (V0.7)
+- Integrate 2-3 prompt repositories so prompts can be selected from them directly (V0.6)
+- Support smarter model type detection and inference framework selection; refactor and simplify the auto-finetune framework selection logic (V0.6)
+- Full-pipeline GRPO support (V0.7)
+
+Documentation
+- Improve the API documentation: ensure every public interface is documented, documented parameters match function parameters, and executable sample code is provided (V0.6)
+- Improve the CookBook documentation: grow the cases to 50, with comparisons against LangChain / LlamaIndex (code volume, speed, extensibility) (V0.6)
+- Improve the Environment documentation: add installation instructions for win/linux/macos and describe the package splitting strategy (V0.6)
+- Improve the Learn documentation: first teach how to use large models, then how to build agents, then how to use workflows, and finally how to build RAG (V0.6)
+
+Quality
+- Reduce CI time to under 10 minutes by mocking most modules (V0.6)
+- Add a daily build and move time-consuming / token-heavy tasks into it (V0.6)
+
+Development, Deployment and Release
+- Debug optimization (v0.7)
+- Process monitoring [output + performance] (v0.7)
+- Environment isolation and automatic environment setup for the training and inference frameworks we depend on (V0.6)
+
+Ecosystem
+- Promote the open-sourcing of LazyCraft (V0.6)
+- Promote the open-sourcing of LazyRAG (V0.7)
+- Mirror the code to 2 code hosting sites besides Github and pursue community collaboration (V0.6)
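The Quality item above (cutting CI time to under 10 minutes by mocking most modules) can be illustrated with a minimal, hypothetical sketch: the `Pipeline` class below stands in for a LazyLLM module whose model call would be slow and token-expensive in CI, and `unittest.mock` replaces only that call so the surrounding logic is still exercised.

```python
from unittest import mock

class Pipeline:
    """Hypothetical stand-in for a module with an expensive model call."""

    def run_model(self, prompt: str) -> str:
        # In real CI this would hit a deployed LLM: slow and token-expensive.
        raise RuntimeError("would call a real LLM endpoint")

    def build_answer(self, prompt: str) -> str:
        # The post-processing around the model call is what the test covers.
        return self.run_model(prompt).strip().upper()

pipe = Pipeline()
# Patch only the expensive call; the rest of the pipeline runs for real.
with mock.patch.object(Pipeline, "run_model", return_value=" ok "):
    answer = pipe.build_answer("hello")
```

Outside the `with` block the real (expensive) `run_model` is restored, so mocked and unmocked test suites can coexist.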

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "lazyllm"
-version = "0.5.0"
+version = "0.6.0"
 description = "A Low-code Development Tool For Building Multi-agent LLMs Applications."
 authors = ["wangzhihong <wangzhihong@sensetime.com>"]
 license = "Apache-2.0 license"

tests/charge_tests/test_doc_to_db.py

Lines changed: 1 addition & 0 deletions

@@ -14,6 +14,7 @@ def setUpClass(cls):
         assert data_root_dir
         cls.pdf_root = os.path.join(data_root_dir, "rag_master/default/__data/pdfs")
 
+    @pytest.mark.skip(reason="Skip for now, will be fixed in v0.6")
     def test_doc_to_db_sop(self):
         sql_manager = SqlManager("SQLite", None, None, None, None, db_name=":memory:")
         documents = lazyllm.Document(dataset_path=self.pdf_root, create_ui=False)
