Commit cbe1c2a (parent 7465fcc)

Commit message: readme

File tree: 1 file changed (+5, -2 lines)

README.md (5 additions, 2 deletions)
````diff
@@ -1,6 +1,5 @@
 # Reward-Model
-Framework for reward model for RLHF.
-
+Reward Model training framework for LLM RLHF. The word nemesis originally meant the distributor of fortune, neither good nor bad, simply in due proportion to each according to what was deserved. This is exactly the function of a reward model in RLHF.
 
 ### Quick Start
 * Inference
@@ -17,3 +16,7 @@ tokenizer = AutoTokenizer.from_pretrained(MODEL)
 ```bash
 python src/training.py --config-name <your-config-name>
 ```
+
+## Contributions
+* All contributions are welcome. Check out #issues
+* For an in-depth understanding of reward modeling, check out our [blog](https://explodinggradients.com/)
````
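
The "due proportion" framing in the new README text matches how reward models are commonly trained for RLHF: with a pairwise Bradley–Terry preference loss that pushes the score of a preferred ("chosen") response above a "rejected" one. A minimal sketch of that objective, assuming scalar rewards (the function name is illustrative and not part of this repository's `src/training.py`):

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry preference loss: -log(sigmoid(r_chosen - r_rejected)).

    Small when the reward model scores the preferred response above the
    rejected one; large when the ordering is wrong.
    """
    margin = r_chosen - r_rejected
    # -log(sigmoid(x)) = log(1 + exp(-x)); log1p keeps this numerically stable
    return math.log1p(math.exp(-margin))
```

At a margin of zero the loss is log 2, and it decays toward zero as the chosen response's reward pulls ahead of the rejected one.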
