PaperChat provides quantized models for chatting with arXiv papers on CPU-only machines, combining single-paper understanding with research across multiple papers. A compiled build of llama.cpp is included.
- Runs paper-content dialogue entirely on CPU.
- Answers questions across multiple papers via a keyword query mechanism.
- Lets you build your own paper data resources without downloading the papers themselves.
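To illustrate the idea behind the keyword query mechanism, here is a minimal sketch of keyword-based cross-paper retrieval. This is a hypothetical example, not the project's actual implementation: the `keyword_search` function and the sample paper texts are made up for illustration.

```python
def keyword_search(papers, query, top_k=2):
    """Rank papers by how often the query's keywords appear in their text.

    papers: dict mapping paper title -> full text.
    Returns the top_k matching titles, best match first.
    """
    keywords = [w.lower() for w in query.split()]
    scored = []
    for title, text in papers.items():
        body = text.lower()
        # Score = total number of keyword occurrences in the paper body.
        score = sum(body.count(k) for k in keywords)
        if score > 0:
            scored.append((score, title))
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_k]]

# Toy corpus (made-up excerpts, for demonstration only).
papers = {
    "Attention Is All You Need": "transformer attention self-attention encoder decoder",
    "Deep Residual Learning": "residual networks image recognition skip connections",
}
print(keyword_search(papers, "attention transformer"))
```

A real system would normalize tokens and weight rare terms (e.g. TF-IDF), but the retrieve-then-answer flow is the same.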
1. Download a GGUF model and put it under the project root.
2. Unzip llama_cpp.rar into the project root, then start the server:
cd llama_cpp
llama-server.exe -m ../Qwen2.5-7B-Instruct.Q4_K_M.gguf -c 2048
3. Start the chat UI in another terminal:
cd chat_ui
python main.py

PaperChat is licensed under the Apache License 2.0, which allows free commercial use. Please include a link to PaperChat and the licensing terms in your product description.
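Once llama-server is running, the chat UI talks to it over HTTP. As a minimal sketch of such a client, the example below assumes the server exposes llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint on its default port 8080; `build_chat_request` and `ask` are illustrative names, not part of this project's code.

```python
import json
import urllib.request

def build_chat_request(question, context, host="http://127.0.0.1:8080"):
    """Build an OpenAI-style chat request for a local llama-server.

    The endpoint path and default port are llama.cpp server defaults
    (assumption; adjust if your server is configured differently).
    """
    payload = {
        "messages": [
            {"role": "system", "content": "Answer using the given paper excerpt."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(question, context):
    """Send the request and return the model's answer text.

    Requires a running llama-server instance.
    """
    with urllib.request.urlopen(build_chat_request(question, context)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the server from step 2 running, `ask("What is the main result?", excerpt)` would return the model's grounded answer.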
The project code is still quite raw; if you make improvements, we welcome contributions back to this project.


