Commit 9271a97

dev-nidgianni-cor authored and committed

CPP lint ran

1 parent db27625 commit 9271a97

File tree

1 file changed: 1 addition, 1 deletion


src/llama-context.cpp

Lines changed: 1 addition & 1 deletion
@@ -2324,7 +2324,7 @@ void llama_context::opt_epoch(
             // callback should receive ibatch for batch 2, not batch 3.
             // Since idata starts at resume_from_batch+1 when resuming, we subtract 1 to get
             // the correct batch number. When not resuming, idata starts at 0, so we use idata directly.
-            const int64_t idata_in_loop = (resume_from_batch > 0) ? (idata - 1) * ubatch_per_ctx : idata * ubatch_per_ctx;
+            const int64_t idata_in_loop = (resume_from_batch > 0) ? (idata - 1) * ubatch_per_ctx : idata * ubatch_per_ctx;

            if (opt_loss_type == GGML_OPT_LOSS_TYPE_CROSS_ENTROPY_MASKED && ggml_opt_dataset_masks(dataset)) {
                ggml_opt_dataset_get_batch_host_with_masks(dataset, tokens.data(), n_ctx*sizeof(llama_token), labels_sparse.data(), masks_sparse.data(), idata);

0 commit comments
