The features are tokenized sentences and the targets `y_true` are normalized rankings. A typical model accepts the tokenized sentences as input and outputs their order/ranks.
`x`: tokenize(['sentence 1', 'sentence 2', 'sentence 3', 'sentence 4'])
`y_true`: [0., 0.33333333, 0.66666667, 1.]
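For context, these targets can be read as rank indices scaled into [0, 1]. A minimal sketch of how such labels might be produced (the tokenizer is omitted; any `tokenize` helper that turns the sentences into model inputs would do):

```python
import numpy as np

sentences = ['sentence 1', 'sentence 2', 'sentence 3', 'sentence 4']

# x = tokenize(sentences)  # model inputs, produced by whatever tokenizer is used

# Normalized ranks: rank index divided by (n - 1), giving values in [0, 1].
n = len(sentences)
y_true = np.arange(n, dtype=np.float32) / (n - 1)
print(y_true)  # [0.         0.33333334 0.6666667  1.        ]
```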
Which of the loss function implementations is suitable for this kind of data?