Commit d80b920
feat: Add llama_model_is_hybrid API call
Also, split llama_model_is_recurrent into llm_arch_is_recurrent in
llama-arch, with llama_model_is_recurrent delegating to
llm_arch_is_recurrent. The same split is done for hybrid. This is needed
because there are places where the llama_model has not yet been initialized
but we need to check if the model is recurrent (specifically for the
per-layer recurrent check array in hparams).
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

1 parent d1aec07 · commit d80b920
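
The captured diff shows only line numbers, so below is a minimal sketch of the split the commit message describes, written against llama.cpp's internal types. The include paths and the architecture lists inside the switches are illustrative assumptions, not the commit's exact contents.

```cpp
// Sketch only: the real definitions live in the llama-arch sources and the
// llama_model_* wrappers in the model/public-API sources; the architecture
// lists here are examples, not the commit's actual lists.
#include "llama.h"        // public API: llama_model_is_recurrent / _is_hybrid
#include "llama-arch.h"   // internal: enum llm_arch
#include "llama-model.h"  // internal: struct llama_model (holds model->arch)

// Arch-level predicate: usable before a llama_model exists, e.g. while
// filling the per-layer recurrent flag array in hparams during loading.
bool llm_arch_is_recurrent(const llm_arch & arch) {
    switch (arch) {
        case LLM_ARCH_MAMBA:  // example recurrent architectures
        case LLM_ARCH_RWKV6:
            return true;
        default:
            return false;
    }
}

// Same idea for hybrid (attention + recurrent) architectures.
bool llm_arch_is_hybrid(const llm_arch & arch) {
    switch (arch) {
        // hybrid architectures would be enumerated here
        default:
            return false;
    }
}

// Public API wrappers now just delegate to the arch-level helpers instead of
// duplicating the switch on model->arch.
bool llama_model_is_recurrent(const llama_model * model) {
    return llm_arch_is_recurrent(model->arch);
}

bool llama_model_is_hybrid(const llama_model * model) {
    return llm_arch_is_hybrid(model->arch);
}
```

Keeping the switch at the arch level means callers that only have an llm_arch value (such as hparams setup during model load) and callers that hold a fully constructed llama_model share one source of truth.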
File tree: 4 files changed, +33 / -8 lines
- include
- src
Diff hunks (diff text not shown):
- +3 lines at 557-559
- +22 lines at 1747-1768
- +3 lines at 438-440
- -8 lines at 13803-13810, +5 lines at 13803-13807
0 commit comments