Commit 502812b

Remove debug assert

1 parent 105261d · commit 502812b

File tree

1 file changed: 0 additions, 2 deletions

src/llama-quant.cpp

Lines changed: 0 additions & 2 deletions

```diff
@@ -865,8 +865,6 @@ static void llama_model_quantize_impl(const std::string & fname_inp, const std::
         is_clip_model |= name.rfind("mm.", 0) == 0; // check the "mm." prefix
     }
 
-    GGML_ASSERT(qs.n_ffn_down_exp != 0);
-
     qs.n_ffn_down = qs.n_ffn_gate = qs.n_ffn_up = (int)model.hparams.n_layer;
 
     // sanity checks for models that have attention layers
```

0 commit comments