Why are my results so bad? #162
Unanswered
peterlau0626 asked this question in Q&A
Replies: 4 comments 3 replies
- It looks like you loaded the LLaMA model? For conversational Q&A, the Alpaca model gives a much better experience.
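For reference, here is a minimal sketch of how an instruction-tuned (Alpaca-style) merged checkpoint is usually prompted with transformers. The model path and the exact prompt template are assumptions for illustration; check the project README for the template your checkpoint actually expects.

```python
# Minimal prompting sketch for an Alpaca-style merged checkpoint (the path and
# the template below are assumptions; see the project README for the exact
# template your checkpoint was trained with).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

MODEL_PATH = "path/to/merged-chinese-alpaca-13b"  # hypothetical local path

# Commonly used Alpaca-style instruction template (assumed).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

tokenizer = LlamaTokenizer.from_pretrained(MODEL_PATH)
model = LlamaForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

prompt = PROMPT_TEMPLATE.format(instruction="用中文介绍一下大语言模型")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```

A plain LLaMA or Chinese-LLaMA checkpoint, by contrast, is only a base language model, so feeding it a bare chat question tends to produce rambling continuations rather than answers.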
- With the 7B Alpaca model the results are also very poor, basically unusable. I will test the 13B model later.
- Well, the results are hard to describe. It doesn't seem to be a matter of knowing how to use it; the model's understanding of Chinese instructions feels rather scrambled. I'm using the merged 13B model shared by another user.
- Try this one, built on gpt-x-vicuna; the results are pretty good.
- I first converted the original Facebook model to HF format, then merged the models with merge_llama_with_chinese_lora_to_hf.py; both models are 13B. Finally I ran inference with transformers, and the output looks very bad.
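For anyone hitting the same problem, below is a minimal diagnostic sketch, not the project's official inference script; the directory path is a placeholder and the checks are illustrative assumptions. Two plausible causes of garbled output after the convert-and-merge pipeline are (1) a mismatch between the extended Chinese tokenizer and the merged model's embedding size, and (2) having merged the base Chinese-LLaMA LoRA when an instruction-tuned Chinese-Alpaca LoRA was intended, in which case free-form Q&A will look bad even though the merge itself succeeded.

```python
# Minimal sanity-check sketch (assumed layout: MERGED_DIR is the output of
# merge_llama_with_chinese_lora_to_hf.py; names and paths are illustrative).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

MERGED_DIR = "path/to/merged-chinese-13b"  # hypothetical merge output directory

tokenizer = LlamaTokenizer.from_pretrained(MERGED_DIR)
model = LlamaForCausalLM.from_pretrained(
    MERGED_DIR, torch_dtype=torch.float16, device_map="auto"
)

# Check 1: the Chinese models use an extended vocabulary, so the embedding
# matrix must cover the tokenizer; a mismatch (e.g. tokenizer from one
# checkpoint, weights from another) typically yields garbled text.
embed_rows = model.get_input_embeddings().weight.shape[0]
print(f"tokenizer size = {len(tokenizer)}, embedding rows = {embed_rows}")
assert embed_rows >= len(tokenizer), "tokenizer/embedding size mismatch"

# Check 2: greedy continuation of a simple Chinese prefix. A base (non-Alpaca)
# merge should still continue text coherently here, even though it cannot
# follow chat-style instructions.
prompt = "今天天气很好,我们决定"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If the size check passes and plain continuation looks reasonable, the remaining suspect is usually the prompt format, i.e. not using the instruction template shown earlier in the thread.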