Commit dfe299d

Update README.md
1 parent b67dd3b commit dfe299d

File tree

1 file changed (+3 −4 lines)


README.md

Lines changed: 3 additions & 4 deletions
@@ -1,5 +1,4 @@
-# AIXpert: Factual Preference Alignment for Large Language Models
-
+# Reducing Hallucinations in LLMs via Factuality-Aware Preference Learning
 ### A Modular Benchmark & Training Framework for Factual-Aware DPO
 
 <p align="center">
@@ -18,7 +17,7 @@
 
 ## 🧭 About
 
-**AIXpert Preference Alignment** is a full-stack **research and engineering framework** for studying and improving **factual alignment in preference-optimized Large Language Models (LLMs)**.
+**Factual Preference Alignment** is a **research and engineering framework** for studying and improving **factual alignment in preference-optimized Large Language Models (LLMs)**.
 
 The project introduces **Factual-DPO**, a factuality-aware extension of **Direct Preference Optimization (DPO)** that incorporates:
 

@@ -246,6 +245,6 @@ For questions, collaborations, or issues:
 
 ---
 
-### 🚀 AIXpert advances **factually aligned, preference-optimized language models** through principled data construction, training, and evaluation.
+### Factual DPO reduces hallucinations and increases factuality
 
 **We invite researchers and practitioners to build upon this framework.**
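
For orientation, the README's **Factual-DPO** is described only as a factuality-aware extension of DPO; the commit does not show its implementation. Below is a minimal, hypothetical sketch of what such an objective can look like. The function name `factual_dpo_loss`, the `factuality_gap` input, and the margin formulation are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

def factual_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), shape (batch,)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), shape (batch,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), shape (batch,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), shape (batch,)
    factuality_gap: torch.Tensor,         # hypothetical score s(y_w) - s(y_l) in [0, 1]
    beta: float = 0.1,                    # standard DPO temperature
    gamma: float = 1.0,                   # weight of the factuality margin (assumed)
) -> torch.Tensor:
    """Sketch of a factuality-aware DPO loss.

    Standard DPO maximizes the log-sigmoid of the scaled reward gap between
    the chosen and rejected responses. This variant additionally demands a
    larger gap when the chosen response is judged more factual than the
    rejected one (a margin scheme, as in margin-based DPO variants). With
    factuality_gap = 0 it reduces to vanilla DPO.
    """
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_rewards - rejected_rewards)
    # Subtracting the margin forces the model to earn a reward gap of at
    # least gamma * factuality_gap before the pairwise loss saturates.
    return -F.logsigmoid(logits - gamma * factuality_gap).mean()
```

In such a setup, the log-probabilities would come from summing per-token log-probs of the policy and a frozen reference model over each response, and `factuality_gap` from an external fact-checking or attribution scorer.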
