diff --git a/packages/backend/src/assets/ai.json b/packages/backend/src/assets/ai.json index 2d03b08a9..0602b6973 100644 --- a/packages/backend/src/assets/ai.json +++ b/packages/backend/src/assets/ai.json @@ -314,12 +314,12 @@ "description": "# Granite-4.0-H-Tiny\n\n**Model Summary:**\nGranite-4.0-H-Tiny is a 7B parameter long-context instruct model finetuned from *Granite-4.0-H-Tiny-Base* using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. Granite 4.0 instruct models feature improved *instruction following (IF)* and *tool-calling* capabilities, making them more effective in enterprise applications.\n\n- **Developers:** Granite Team, IBM\n- **HF Collection:** [Granite 4.0 Language Models HF Collection](https://huggingface.co/collections/ibm-granite/granite-40-language-models-6811a18b820ef362d9e5a82c)\n- **GitHub Repository:** [ibm-granite/granite-4.0-language-models](https://github.com/ibm-granite/granite-4.0-language-models)\n- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) \n- **Release Date**: October 2nd, 2025\n- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)\n\n**Supported Languages:** \nEnglish, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.0 models for languages beyond these languages.\n\n**Intended use:** \nThe model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.\n\n*Capabilities*\n* Summarization\n* Text classification\n* Text extraction\n* Question-answering\n* Retrieval Augmented Generation (RAG)\n* Code related tasks\n* Function-calling tasks\n* Multilingual dialog use cases\n* Fill-In-the-Middle (FIM) code completions\n\n\n \n**Generation:** \nThis is a simple example of how to use Granite-4.0-H-Tiny model.\n\nInstall the following libraries:\n\n```shell\npip install torch torchvision torchaudio\npip install accelerate\npip install transformers\n```\nThen, copy the snippet from the section that is relevant for your use case.\n\n```python\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ndevice = \"cuda\"\nmodel_path = \"ibm-granite/granite-4.0-h-tiny\"\ntokenizer = AutoTokenizer.from_pretrained(model_path)\n# drop device_map if running on CPU\nmodel = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)\nmodel.eval()\n# change input text as desired\nchat = [\n { \"role\": \"user\", \"content\": \"Please list one IBM Research laboratory located in the United States. You should only output its name and location.\" },\n]\nchat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)\n# tokenize the text\ninput_tokens = tokenizer(chat, return_tensors=\"pt\").to(device)\n# generate output tokens\noutput = model.generate(**input_tokens, \n max_new_tokens=100)\n# decode output tokens into text\noutput = tokenizer.batch_decode(output)\n# print output\nprint(output[0])\n```\n\nExpected output:\n```shell\n<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. 
You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>Almaden Research Center, San Jose, California<|end_of_text|>
```

**Tool-calling:**
Granite-4.0-H-Tiny comes with enhanced tool-calling capabilities, enabling seamless integration with external functions and APIs. To define a list of tools, please follow OpenAI's function [definition schema](https://platform.openai.com/docs/guides/function-calling?api-mode=responses#defining-functions).

This is an example of how to use the Granite-4.0-H-Tiny model's tool-calling ability:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "Name of the city"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

# change input text as desired
chat = [
    { "role": "user", "content": "What's the weather like in Boston right now?" },
]
chat = tokenizer.apply_chat_template(chat,
                                     tokenize=False,
                                     tools=tools,
                                     add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens,
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])
```

Expected output:
```shell
<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following tools. You may call one or more tools to assist with the user query.

You are provided with function signatures within XML tags:

{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather for a specified city.", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "Name of the city"}}, "required": ["city"]}}}

For each tool call, return a json object with function name and arguments within XML tags:

{"name": , "arguments": }
. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>What's the weather like in Boston right now?<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
{"name": "get_current_weather", "arguments": {"city": "Boston"}}
<|end_of_text|>
```
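Once the model emits a tool call like the one above, the calling application is responsible for executing it. The following is a minimal sketch of that step, reusing `output` from the snippet above; `get_current_weather` here is a hypothetical stub standing in for a real weather API, and the regex-based extraction is a simplification of parsing the chat template's tool-call wrapping.

```python
import json
import re

# Hypothetical stand-in for a real weather API; for illustration only.
def get_current_weather(city: str) -> dict:
    return {"city": city, "temperature": "22 C", "conditions": "sunny"}

tool_impls = {"get_current_weather": get_current_weather}

# Isolate the assistant turn from the decoded output, then grab the first
# JSON object inside it. The exact wrapping of the tool call depends on the
# chat template, so this extraction is deliberately simplified.
assistant_turn = output[0].split("<|start_of_role|>assistant<|end_of_role|>")[-1]
match = re.search(r"\{.*\}", assistant_turn, re.DOTALL)
if match is not None:
    call = json.loads(match.group(0))
    result = tool_impls[call["name"]](**call["arguments"])
    print(result)  # {'city': 'Boston', 'temperature': '22 C', 'conditions': 'sunny'}
```

In a full agent loop, the parsed call and its result would typically be appended to the conversation (as an assistant tool call followed by a `tool`-role message) and generation run again, so the model can phrase the final answer.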
**Evaluation Results:**

| Benchmarks | Metric | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE |
|---|---|---|---|---|---|
| **General Tasks** | | | | | |
| MMLU | 5-shot | 65.98 | 67.43 | 68.65 | 78.44 |
| MMLU-Pro | 5-shot, CoT | 44.5 | 43.48 | 44.94 | 55.47 |
| BBH | 3-shot, CoT | 72.48 | 69.36 | 66.34 | 81.62 |
| AGI EVAL | 0-shot, CoT | 64.29 | 59 | 62.15 | 70.63 |
| GPQA | 0-shot, CoT | 30.14 | 32.15 | 32.59 | 40.63 |
| **Alignment Tasks** | | | | | |
| AlpacaEval 2.0 | | 29.49 | 31.49 | 30.61 | 42.48 |
| IFEval | Instruct, Strict | 85.5 | 86.94 | 84.78 | 89.87 |
| IFEval | Prompt, Strict | 79.12 | 81.71 | 78.1 | 85.22 |
| IFEval | Average | 82.31 | 84.32 | 81.44 | 87.55 |
| ArenaHard | | 25.84 | 36.15 | 35.75 | 46.48 |
| **Math Tasks** | | | | | |
| GSM8K | 8-shot | 85.45 | 81.35 | 84.69 | 87.27 |
| GSM8K Symbolic | 8-shot | 79.82 | 77.5 | 81.1 | 87.38 |
| Minerva Math | 0-shot, CoT | 62.06 | 66.44 | 69.64 | 74 |
| DeepMind Math | 0-shot, CoT | 44.56 | 43.83 | 49.92 | 59.33 |
| **Code Tasks** | | | | | |
| HumanEval | pass@1 | 80 | 81 | 83 | 88 |
| HumanEval+ | pass@1 | 72 | 75 | 76 | 83 |
| MBPP | pass@1 | 72 | 73 | 80 | 84 |
| MBPP+ | pass@1 | 64 | 64 | 69 | 71 |
| CRUXEval-O | pass@1 | 41.5 | 41.25 | 39.63 | 50.25 |
| BigCodeBench | pass@1 | 39.21 | 37.9 | 41.06 | 46.23 |
| **Tool Calling Tasks** | | | | | |
| BFCL v3 | | 59.98 | 57.56 | 57.65 | 64.69 |
| **Multilingual Tasks** | | | | | |
| MULTIPLE | pass@1 | 49.21 | 49.46 | 55.83 | 57.37 |
| MMMLU | 5-shot | 55.14 | 55.19 | 61.87 | 69.69 |
| INCLUDE | 5-shot | 51.62 | 50.51 | 53.12 | 63.97 |
| MGSM | 8-shot | 28.56 | 44.48 | 45.36 | 38.72 |
| **Safety** | | | | | |
| SALAD-Bench | | 97.06 | 96.28 | 97.77 | 97.3 |
| AttaQ | | 86.05 | 84.44 | 86.61 | 86.64 |
Multilingual benchmarks and the included languages:
| Benchmarks | # Langs | Languages |
|---|---|---|
| MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi |
| INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh |
| MGSM | 5 | en, es, fr, ja, zh |

**Model Architecture:**
Granite-4.0-H-Tiny is built on a decoder-only MoE transformer architecture. Core components of this architecture are GQA, Mamba2, MoE with shared experts, SwiGLU activation, RMSNorm, and shared input/output embeddings.
| Model | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE |
|---|---|---|---|---|
| Embedding size | 2560 | 2048 | 1536 | 4096 |
| Number of layers | 40 attention | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 |
| Attention head size | 64 | 64 | 128 | 128 |
| Number of attention heads | 40 | 32 | 12 | 32 |
| Number of KV heads | 8 | 8 | 4 | 8 |
| Mamba2 state size | - | 128 | 128 | 128 |
| Number of Mamba2 heads | - | 64 | 48 | 128 |
| MLP / Shared expert hidden size | 8192 | 8192 | 1024 | 1536 |
| Num. experts | - | - | 64 | 72 |
| Num. active experts | - | - | 6 | 10 |
| Expert hidden size | - | - | 512 | 768 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Sequence length | 128K | 128K | 128K | 128K |
| Position embedding | RoPE | NoPE | NoPE | NoPE |
| # Parameters | 3B | 3B | 7B | 32B |
| # Active parameters | 3B | 3B | 1B | 9B |
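As a sanity check, the hyperparameters above can be compared against the checkpoint's published configuration. Here is a minimal sketch using `transformers.AutoConfig`; the MoE-specific attribute names (e.g. `num_local_experts`, `num_experts_per_tok`) are assumptions that may differ between transformers versions, hence the guarded lookups.

```python
from transformers import AutoConfig

# Fetch the published config for the checkpoint (no weights are downloaded).
config = AutoConfig.from_pretrained("ibm-granite/granite-4.0-h-tiny")

# Print the full config; the hyperparameters in the table above
# should be recoverable from these fields.
print(config)

# Attribute names below are assumptions and may differ between
# transformers versions; getattr keeps the sketch from crashing.
for name in ("num_hidden_layers", "hidden_size", "num_attention_heads",
             "num_key_value_heads", "num_local_experts", "num_experts_per_tok"):
    print(name, "=", getattr(config, name, "<not present>"))
```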

**Training Data:**
Overall, our SFT data is largely composed of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) a select set of human-curated data.

**Infrastructure:**
We trained the Granite 4.0 Language Models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

**Ethical Considerations and Limitations:**
Granite 4.0 instruct models are primarily finetuned on instruction-response pairs that are mostly in English, supplemented with multilingual data covering multiple languages. Although the model can handle multilingual dialog use cases, its performance on non-English tasks may not match its performance on English tasks. In such cases, introducing a small number of examples (few-shot prompting) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.

**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources

",
  "registry": "Hugging Face",
  "license": "Apache-2.0",
- "url": "https://huggingface.co/ibm-granite/granite-4.0-h-tiny-GGUF/resolve/main/granite-4.0-h-tiny-Q4_K_M.gguf",
+ "url": "https://huggingface.co/ibm-granite/granite-4.0-h-tiny-GGUF/resolve/3971ea11968c34d4e4dbee55cfb55b9cba134b21/granite-4.0-h-tiny-Q4_K_M.gguf",
  "memory": 4224733676,
  "properties": {
    "jinja": "true"
  },
- "sha256": "9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495",
+ "sha256": "491ba81786c46a345a5da9a60cdb9f9a3056960c8411dd857153c194b1f91313",
  "backend": "llama-cpp"
},
{