[Feature]: Tool Calling Support #1252

@dolphyisnoomi

Background & Description

Some newer LLMs support tool calling, for example Qwen3-4B-Instruct-2507 used together with Qwen Agent. I would like LlamaSharp to harness this capability and become more agentic, as shown in this Python example:

from qwen_agent.agents import Assistant

llm_cfg = {
    'model': 'Qwen3-4B-Instruct-2507',
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Tool list: MCP servers (time and fetch) plus the built-in code interpreter.
tools = [
    {
        'mcpServers': {
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            'fetch': {
                'command': 'uvx',
                'args': ['mcp-server-fetch']
            }
        }
    },
    'code_interpreter',
]

bot = Assistant(llm=llm_cfg, function_list=tools)

messages = [{
    'role': 'user',
    'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'
}]

# bot.run streams incremental responses; keep the last (complete) one.
for responses in bot.run(messages=messages):
    pass

print(responses)

This would let us get more out of LlamaSharp inference and support the shift toward agentic AI.
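
For reference, the mechanism Qwen Agent builds on is the tools parameter of the OpenAI-compatible chat completions API: the model returns a structured tool call, the client executes it, and the result is sent back for a final answer. Below is a minimal sketch of that loop against a local OpenAI-compatible server; the endpoint URL, model name, and the get_current_time tool are placeholders for illustration, not part of any existing LlamaSharp API.

from openai import OpenAI
import json

client = OpenAI(base_url='http://localhost:8000/v1', api_key='EMPTY')

# A single illustrative tool, described with a JSON schema.
tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_time',  # hypothetical tool for this sketch
        'description': 'Return the current local time in a given timezone.',
        'parameters': {
            'type': 'object',
            'properties': {
                'timezone': {'type': 'string', 'description': 'IANA name, e.g. Asia/Shanghai'}
            },
            'required': ['timezone'],
        },
    },
}]

messages = [{'role': 'user', 'content': 'What time is it in Shanghai?'}]

# 1) The model decides whether to call a tool and returns structured tool_calls.
response = client.chat.completions.create(
    model='Qwen3-4B-Instruct-2507', messages=messages, tools=tools)
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)  # e.g. {"timezone": "Asia/Shanghai"}

# 2) The client executes the named tool itself and feeds the result back.
messages.append(response.choices[0].message)
messages.append({'role': 'tool', 'tool_call_id': call.id,
                 'content': f'12:00 in {args.get("timezone", "?")}'})  # dummy tool result
final = client.chat.completions.create(
    model='Qwen3-4B-Instruct-2507', messages=messages, tools=tools)
print(final.choices[0].message.content)

Exposing something equivalent to this request/response shape from LlamaSharp would let frameworks like Qwen Agent drive local inference directly.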

API & Usage

Maybe you could try integrating Qwen Agent or a similar solution into LlamaSharp.
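
Regardless of which agent framework sits on top, one piece LlamaSharp itself would need is recognising tool calls in the raw generated text. The sketch below shows that parsing step in Python purely to illustrate the logic (inside LlamaSharp it would be C#); the Hermes-style <tool_call> JSON blocks are an assumption based on the chat template Qwen publishes for its instruct models.

import json
import re

# Matches one <tool_call>{...}</tool_call> block emitted by the model.
TOOL_CALL_RE = re.compile(r'<tool_call>\s*(\{.*?\})\s*</tool_call>', re.DOTALL)

def extract_tool_calls(model_output: str):
    """Split raw generated text into plain text and parsed tool calls."""
    calls = []
    for match in TOOL_CALL_RE.finditer(model_output):
        try:
            calls.append(json.loads(match.group(1)))  # {"name": ..., "arguments": {...}}
        except json.JSONDecodeError:
            pass  # a real implementation should surface malformed calls to the caller
    plain_text = TOOL_CALL_RE.sub('', model_output).strip()
    return plain_text, calls

# Example:
text, calls = extract_tool_calls(
    'Let me check.\n<tool_call>\n{"name": "get_current_time", '
    '"arguments": {"timezone": "Asia/Shanghai"}}\n</tool_call>')
print(text)   # Let me check.
print(calls)  # [{'name': 'get_current_time', 'arguments': {'timezone': 'Asia/Shanghai'}}]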

How to implement

No response
