
Updated existing Granite Guardian recipes #260

Open
MariaThomson wants to merge 2 commits into ibm-granite-community:main from MariaThomson:update-books

Conversation

@MariaThomson

PR Checklist

Resolves #174

  • Updated existing recipes to the latest Granite Guardian model.

Model Interaction

  • Flexible LLM platform support: The platform should be easily switchable. Use LangChain or LlamaIndex.
  • Use the prompt guide corresponding to the model: for example, the guide for Granite 3.x Language Models.
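To make the "easily switchable platform" requirement concrete, here is an illustrative sketch of the pattern: the recipe body talks to one small interface, and only a factory function knows which backend is selected. The class and function names below are hypothetical stand-ins, not part of the ibm-granite-community utilities or the LangChain/LlamaIndex APIs.

```python
# Hypothetical sketch: swap LLM platforms behind one factory function.
# In a real recipe, OllamaChat/ReplicateChat would be actual LangChain
# or LlamaIndex client classes; here they are stubs for illustration.

class OllamaChat:
    """Stub standing in for a local-platform chat client."""
    def __init__(self, model: str):
        self.model = model

    def invoke(self, prompt: str) -> str:
        return f"[ollama:{self.model}] {prompt}"


class ReplicateChat:
    """Stub standing in for a hosted-platform chat client."""
    def __init__(self, model: str):
        self.model = model

    def invoke(self, prompt: str) -> str:
        return f"[replicate:{self.model}] {prompt}"


def make_chat_model(platform: str, model: str):
    """Return a chat client for the requested platform name."""
    backends = {"ollama": OllamaChat, "replicate": ReplicateChat}
    try:
        return backends[platform](model)
    except KeyError:
        raise ValueError(f"unsupported platform: {platform}") from None
```

With this shape, switching platforms is a one-line change at the top of the notebook, e.g. `llm = make_chat_model("ollama", "granite-guardian")`, and the rest of the recipe only ever calls `llm.invoke(...)`.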

Data

  • Example data: Follow the example data guidance.

Notebook requirements

  • Notebook outputs cleared: Ensure all notebook outputs are cleared.
  • Pre-commit hooks run: Ensure the pre-commit hooks for notebooks have been run.
  • Automated testing: Add the recipe to the automated tests as described here
  • Test in Google Colab:
    • Test that it works in Google Colab (Python 3.10.12).
    • Colab has its own package set and Python version, so ensure compatibility.
  • Test locally:
    • Ensure the code works in a fresh Python virtual environment (venv).
  • Standard access to secrets and variables: Include %pip install git+https://github.com/ibm-granite-community/utils in the first code cell so that get_env_var is available for accessing secrets and variables in the recipe.

Incoming References

  • README.md updates:
    • Add a link to the recipe in the Table of Contents (ToC).
    • Include a Colab button after that link if the notebook can be run in Colab.

GitHub

  • Commits signed: All commits must be GPG or SSH signed.
  • DCO Compliance: Developer Certificate of Origin (DCO) applies to the code, documentation, and any example data provided. Ensure commits are signed off.

Member

@bjhargrave bjhargrave left a comment


This is a big proposed change. It uses a larger model (8b) and seems to want to use vLLM for running the model. These notebooks are generally meant to run locally on people's workstations, and most people do not have vLLM/CUDA-capable workstations. This is why the notebooks use smaller models and run with transformers instead of vLLM.

Unfortunately, granite guardian 3.3 is not available on watsonx.ai, otherwise at least the watsonx notebook could be updated.

The only practical way to update to granite guardian 3.3 would be to use one of the quantized models (https://huggingface.co/ibm-granite/granite-guardian-3.3-8b-GGUF) on Ollama.
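As a sketch of what that Ollama route could look like from a notebook, the snippet below calls a local Ollama server's REST API with only the standard library. The model tag "granite-guardian" is a placeholder for whichever quantization from the GGUF repo you have pulled into Ollama; the endpoint and payload shape follow Ollama's /api/generate API.

```python
import json
import urllib.request

def guardian_check(prompt: str,
                   model: str = "granite-guardian",
                   host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally running Ollama server and return the text.

    Assumes Ollama is running and the named model (a placeholder tag here)
    has already been pulled or created from the quantized GGUF weights.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response, not a stream
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because this needs only the standard library and a local Ollama install, it would keep the updated recipe runnable on ordinary workstations without CUDA.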

