Commit 09a5ff7

Author: Aki
Commit message: initial
1 parent 2debb57 commit 09a5ff7

44 files changed (+2216, -0 lines)

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -13,3 +13,6 @@ export_presets.cfg
 .mono/
 data_*/
 mono_crash.*.json
+
+#
+.env.json

README.md

Lines changed: 52 additions & 0 deletions
@@ -1,2 +1,54 @@

# godot-groq-moa

Mixture of Agents using Groq with Godot (not a wrapper; it can export to the Web).

This is not based on LangChain, but the behavior should be almost the same.
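
To make the flow concrete, here is a minimal sketch of a Mixture-of-Agents round written against the Groq Python SDK, mirroring the referenced python version rather than this project's GDScript. The model names, prompts, and the `ask`/`moa` helpers are illustrative assumptions, not code from this repository:

```
# Minimal Mixture-of-Agents sketch using the Groq Python SDK.
# Model names, prompts, and helper names are assumptions for illustration only.
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")  # placeholder; load your real key securely

AGENT_MODELS = ["llama-3.1-8b-instant", "gemma2-9b-it"]  # assumed agent models
AGGREGATOR_MODEL = "llama-3.1-70b-versatile"             # assumed aggregator model

def ask(model: str, system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def moa(prompt: str, cycles: int = 2) -> str:
    references: list[str] = []
    for _ in range(cycles):
        # Each agent answers; from the second cycle on it also sees the previous answers.
        system = "You are a helpful assistant."
        if references:
            system += "\n\nResponses from other agents:\n" + "\n\n".join(references)
        references = [ask(m, system, prompt) for m in AGENT_MODELS]
    # Final inference: a larger model synthesizes the last cycle's answers.
    system = ("Synthesize the following candidate answers into one response:\n"
              + "\n\n".join(references))
    return ask(AGGREGATOR_MODEL, system, prompt)

print(moa("Explain the Mixture-of-Agents idea in two sentences."))
```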

## Web Demo (clipboard is unavailable; drop your API key as a JSON file)

https://akigame.itch.io/godot-llm-moa

## Godot Version

I used Godot 4.3 r1 for my web export, but it was developed under 4.2.2.

Python version (not my project):
https://github.com/skapadia3214/groq-moa

## Final Inference

The paper seems to say to reference all versions (cycle 1 through the last), but that would consume too many tokens.
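
As a rough illustration of that trade-off (the helper names and data layout below are assumptions, not this repo's code): including every cycle makes the aggregation prompt grow with the number of cycles, while including only the last cycle keeps it roughly constant.

```
# Hypothetical helpers; `cycles` is a list of lists of answer strings, one list per cycle.
def aggregation_context_all(cycles):
    # "Refer to all versions": every cycle's answers go into the final prompt.
    return "\n\n".join(answer for cycle in cycles for answer in cycle)

def aggregation_context_last(cycles):
    # Cheaper alternative: reference only the last cycle's answers.
    return "\n\n".join(cycles[-1])
```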

## License

All the code is MIT.

### Prompts

The prompts are Apache 2.0. I'm not sure about the prompt license, but I wrote the prompts based on the groq-moa version, which is why they are released under Apache 2.0.

## Models

I recommend using Llama 3.1 because of its token limits.

### gemma2-9b-it

Sometimes breaks when given the other agents' responses as references, so stop passing refs to it in code (see the sketch below).
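
A minimal sketch of that workaround, assuming a per-model guard; the `NO_REF_MODELS` set and `system_for` helper are hypothetical names, not this repo's code:

```
# Hypothetical guard for models that misbehave when fed reference context.
NO_REF_MODELS = {"gemma2-9b-it"}

def system_for(model: str, base: str, references: list[str]) -> str:
    # Omit the "responses from other agents" block for models in NO_REF_MODELS.
    if model in NO_REF_MODELS or not references:
        return base
    return base + "\n\nResponses from other agents:\n" + "\n\n".join(references)
```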

### groq-tools

## My Opinion about MoA

I feel this needs bigger LLMs: the main model needs around 405B, and the agents need something around the 70B size.

## TODO

I'll add support for Ollama, Gemini, and ChatGPT before next month.

Characters: add more variant images, or remove them, depending on the response.

## Citation

This project implements the Mixture-of-Agents architecture proposed in the following paper:

```
@article{wang2024mixture,
  title={Mixture-of-Agents Enhances Large Language Model Capabilities},
  author={Wang, Junlin and Wang, Jue and Athiwaratkun, Ben and Zhang, Ce and Zou, James},
  journal={arXiv preprint arXiv:2406.04692},
  year={2024}
}
```

For more information about the Mixture-of-Agents concept, please refer to the [original research paper](https://arxiv.org/abs/2406.04692) and the [Together AI blog post](https://www.together.ai/blog/together-moa).
