
Add support for the Training Method for finetuning, and for Direct-Preference Optimization (DPO) #297

Triggered via pull request: March 4, 2025 14:26
Status: Success
Total duration: 42s

Workflow: _tests.yml

on: pull_request
Matrix: build
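
The run summary above references a `_tests.yml` workflow triggered on `pull_request` with a `build` matrix. A minimal sketch of what such a workflow file might look like follows; the matrix values, dependency install step, and `pytest` test command are assumptions for illustration, not taken from the actual repository:

```yaml
name: tests

on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Illustrative assumption: matrix over Python versions
        python-version: ["3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Hypothetical install and test commands
      - run: pip install -e ".[test]"
      - run: pytest
```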