
🇨🇳中文

PaperChat

PaperChat provides quantized models for chatting with arXiv papers on CPU-only machines. It combines single-paper understanding with research across multiple papers, and ships with a precompiled llama.cpp.

Animation Demo


Usage

  • Chat with paper content entirely on CPU.
  • Cross-paper question answering via a keyword-query mechanism.
  • Build your own paper data resources without downloading the papers themselves.
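The keyword-query mechanism above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it scores each paper by how many query keywords its text contains and returns the best matches (the function name, the sample papers, and the scoring rule are all assumptions for the example).

```python
def keyword_match(query: str, papers: dict[str, str], top_k: int = 2) -> list[str]:
    """Hypothetical cross-paper lookup: rank papers by keyword overlap."""
    keywords = set(query.lower().split())
    scored = []
    for title, text in papers.items():
        overlap = len(keywords & set(text.lower().split()))
        scored.append((overlap, title))
    # Highest overlap first; drop papers with no matching keywords.
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_k] if score > 0]

papers = {
    "Attention Is All You Need": "transformer attention architecture",
    "Deep Residual Learning": "residual networks image recognition",
}
print(keyword_match("attention transformer", papers))
```

A real implementation would normalize tokens and weight rare terms (e.g. TF-IDF), but the control flow of a keyword-based cross-paper query looks like this.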

Animation Demo


Requirements

1. Download the GGUF model and put it under the repository root:

https://modelscope.cn/models/QuantFactory/Qwen2.5-7B-Instruct-GGUF/resolve/master/Qwen2.5-7B-Instruct.Q4_K_M.gguf


2. Unzip llama_cpp.rar under the repository root, then start the server:

cd llama_cpp

llama-server.exe -m ../Qwen2.5-7B-Instruct.Q4_K_M.gguf -c 2048

3. In another terminal, start the chat UI:

cd chat_ui

python main.py
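Once llama-server is running, the chat UI talks to it over HTTP. As a rough sketch of what such a call looks like, the snippet below builds a request for llama-server's OpenAI-compatible /v1/chat/completions endpoint (port 8080 is llama-server's default; the helper names and the system-prompt wording are assumptions, not the project's actual code):

```python
import json
import urllib.request

SERVER_URL = "http://127.0.0.1:8080/v1/chat/completions"  # llama-server default port

def build_chat_request(question: str, paper_excerpt: str) -> dict:
    """Pack a paper excerpt plus a user question into a chat payload."""
    return {
        "messages": [
            {"role": "system",
             "content": "Answer questions using this paper excerpt:\n" + paper_excerpt},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

def ask(question: str, paper_excerpt: str) -> str:
    """Send the request to the local llama-server and return the answer text."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(build_chat_request(question, paper_excerpt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Note that the server was started with `-c 2048`, so the excerpt plus question must fit within a 2048-token context window.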

Contact

License

PaperChat is licensed under the Apache License 2.0, which allows free commercial use. Please include a link to PaperChat and the license terms in your product description.

Contribute

The project code is still quite rough. If you improve it, contributions back to this project are welcome.