Factored Cognition with LLMs
This is a reimagining of Ought's Factored Cognition Primer.
The only requirement (besides Python) is vLLM.
Paper extraction additionally requires pdfminer.six for reading PDFs.
Supports any model in vLLM, including quantized models.
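Given those requirements, a typical setup looks like this (package names are the standard PyPI ones; pin versions as needed):

```shell
pip install vllm          # inference engine; requires a CUDA-capable GPU
pip install pdfminer.six  # only needed for the paper-extraction features
```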
Adding a new model class in models.py:
- The new model should be a subclass of the Model class.
- Include vocab size, context length, and prompt templates.
- Subclass this new model class for specific instantiations of the model (e.g. sizes or quantizations); see LLaMa2 and LLaMa2_7B_Chat_AWQ in models.py for examples.
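A minimal sketch of this subclassing pattern (attribute and template names here are illustrative assumptions, not the repo's exact API; the LLaMa-2 constants are the published ones):

```python
class Model:
    """Base class: subclasses fill in tokenizer and prompt metadata."""
    vocab_size: int
    context_length: int
    prompt_template: str  # how a user turn is wrapped for this model family

class LLaMa2(Model):
    # Family-wide constants for LLaMa-2 chat models.
    vocab_size = 32_000
    context_length = 4_096
    prompt_template = "[INST] {prompt} [/INST]"

class LLaMa2_7B_Chat_AWQ(LLaMa2):
    # A specific instantiation: one size and quantization of the family.
    hf_name = "TheBloke/Llama-2-7B-Chat-AWQ"  # assumed Hugging Face repo id
    quantization = "awq"
```

The family class carries everything shared across sizes, so a new quantized checkpoint only needs a name and a quantization tag.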
Implemented techniques:
- Question answering (with and without context)
- Debate (including a judge)
- Extracting title, authors, abstract, sections from PDFs
- Answering questions based on a PDF
- Recursive amplification
- Verifiers
- Verification chain
- Tool use
- Deduction
- Action selection
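As an illustration of the central idea, recursive amplification can be sketched with stubbed-out model calls (`decompose` and `answer` below are hypothetical stand-ins for the real LLM calls made through vLLM):

```python
from typing import Callable

def amplify(question: str,
            decompose: Callable[[str], list[str]],
            answer: Callable[[str, list[str]], str],
            depth: int = 2) -> str:
    """Recursively split a question into subquestions, answer the
    leaves directly, then answer each question given its sub-answers."""
    if depth == 0:
        return answer(question, [])
    subquestions = decompose(question)
    subanswers = [amplify(q, decompose, answer, depth - 1)
                  for q in subquestions]
    return answer(question, subanswers)

# Toy stand-ins: a real system would prompt an LLM for both steps.
def toy_decompose(q: str) -> list[str]:
    return [f"sub({q},{i})" for i in range(2)]

def toy_answer(q: str, subanswers: list[str]) -> str:
    return f"ans({q};{'+'.join(subanswers)})"
```

Swapping the toy functions for prompted model calls turns this into the amplification loop described above, with `depth` bounding the recursion.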
Exercises:
- Long texts: 3
- Amplification: 1, 2, 3
- Verifiers: 1, 2