Releases: mgonzs13/llama_ros

5.3.3

06 Oct 13:17

Changelog from version 5.3.2 to 5.3.3:

630790c new version 5.3.3
cd5b7c5 fixing params
4c06b47 llama.cpp updated
699064f upgrading python requirements
207b64a llama.cpp updated + removing defrag_thold

5.3.2

07 Aug 17:14

Changelog from version 5.3.1 to 5.3.2:

1c3ffcd new version 5.3.2
bc3453a new models GPT-OSS and MiniCPM-v4
241369f llama.cpp updated - removing patch - adding new param no_extra_bufts

5.3.1

20 Jul 15:56

Changelog from version 5.3.0 to 5.3.1:

86a86c8 new version 5.3.1
5cda267 llama.cpp updated
0b1e16e adding logit_bias_eog + fixing ignore_eos
3d80c2b minor style fixes
b875490 llama.cpp updated

5.3.0

02 Jul 09:21

Changelog from version 5.2.0 to 5.3.0:

3f52a86 new version 5.3.0
eb1ed97 pddl demo added
3b509d1 adding param stream_reasoning
1dfadcc llama.cpp updated
31c8ec7 Tool calling: streaming and reasoning (#32)

5.2.0

28 Jun 16:03

Changelog from version 5.1.0 to 5.2.0:

2c9c555 new version 5.2.0
8b6689e adding prints to mtmd example
1c03401 llama.cpp updated
83314da hf_hub.cpp updated
7617a50 Added a missing tag (#28)
e68db43 fixing clearing mtmds
8dcec78 clear_mtmds function added to llava
77823df minor fixes for audio
bbc6817 multi audio demo added
4abd542 adding support for audio
bbf80ee fixing embeddings
ed8a215 fixing format + llama.cpp updated
2200eb2 fixing format + llama.cpp updated
ef1b853 migrating to new memory functions
273f80c fixing reranking-pooling params
8e97310 Update llama_cpp_vendor to latest version (#27)
f1a4c4a Adding InternVL3
6f03016 adding support for rolling
668c7eb adding iron to README and llama_bt
7eb9d23 testing iron workflows
ce24c07 replacing rolling with kilted
f8d9888 creating rolling workflows
c134529 simplifying ros2 distros in cmakelists
a923a10 use ament_target_dependencies for behaviortree_cpp_v3
4e09104 Removed deprecated ament_cmake_target_dependencies (#26)

5.1.0

21 May 12:22

Changelog from version 5.0.2 to 5.1.0:

bdc7a1c new version 5.1.0
865ff64 New mtmd (#25)
3b6c4dc llama.cpp updated

5.0.2

07 May 15:37

Changelog from version 5.0.1 to 5.0.2:

45986c8 new version 5.0.2
950ff26 fixing sampling params
        - removing unused params (penalty_prompt_tokens, use_penalty_prompt_tokens)
        - adding new top_n_sigma
        - using params in C++: min_keep, ignore_eos, dynatemp_range, dynatemp_exponent, top_n_sigma, xtc_probability, xtc_threshold, dry_multiplier, dry_base, dry_allowed_length, dry_penalty_last_n, dry_sequence_breakers
2080c12 llama.cpp updated
8a2baac adding Qwen3

5.0.1

23 Apr 10:01

Changelog from version 5.0.0 to 5.0.1:

3e6b736 new version 5.0.1
a34c9c8 llama.cpp updated
ef6678d hf_hub.cpp updated

5.0.0

10 Apr 18:44

Changelog from version 4.5.0 to 5.0.0:

14b28db new version 5.0.0
0e5f6ee fixing release workflows
3eeeb5a llama.cpp updated
fbcf3e7 replace size with empty in if
9adcab2 new gemma-3 model
9daf79f dont use hf_hub if repo or file are empty
33de5fe adding C++ comments for Doxygen
8b69ed5 updating llama.cpp + renaming param model to model_path
be739be Bt chat completions (#24)
0dfa1c7 llama.cpp updated
b1dcf2b fixing bt tests for jazzy
ca1fd20 fixing bt tests
f05428b jazzy/humble bt package
9d34176 llama.cpp updated
3235762 Comments in the messages and demo videos (#23)
14f5b63 hf_hub.cpp updated
1db6402 Chat completion fixes (#22)
eca380b fixing chatllama structured output demo
806767b fixing chatllama demos
84f7772 fixing python demos
0573cc8 temporal fix for structured demo
85f926d updating requirements
2d2c4a2 adding gemma-3 and phi-4-mini
915ad0d New Chat Completions endpoint (#21)

4.5.0

04 Mar 22:45

Changelog from version 4.4.1 to 4.5.0:

b47f1b3 new version 4.5.0
a3aeaef Hfhub cpp (#20)
0093120 new jazzy push workflow
63b940a new jazzy build workflow
e021979 adding ament clang to llama_ros test