diff --git a/docs/capabilities/code-generation.mdx b/docs/capabilities/code-generation.mdx
index fc25fa1e..ccdc9c1d 100644
--- a/docs/capabilities/code-generation.mdx
+++ b/docs/capabilities/code-generation.mdx
@@ -350,3 +350,55 @@ messages = [
 MistralAI(api_key=api_key, model=mistral_model).chat(messages)
 ```
 Check out more details on using Instruct and Fill In Middle(FIM) with LlamaIndex in this [notebook](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/cookbooks/codestral.ipynb).
+
+## Integration with Portkey
+Portkey provides an observability, reliability, and caching layer over Codestral Instruct. Here is how you can use Codestral with Portkey:
+
+```py
+# Make sure `portkey-ai` is installed in your Python environment:
+# pip install portkey-ai
+
+import os
+from portkey_ai import Portkey
+
+portkey = Portkey(
+    api_key=os.environ["PORTKEY_API_KEY"],
+    provider="mistral-ai",
+    authorization=f"Bearer {os.environ['MISTRAL_API_KEY']}"
+)
+
+mistral_model = "codestral-latest"
+messages = [{"role": "user", "content": "Write a function for fibonacci"}]
+
+code_completion = portkey.chat.completions.create(
+    model=mistral_model,
+    messages=messages
+)
+
+# The response follows the OpenAI chat-completions format
+print(code_completion.choices[0].message.content)
+```
+Check out more details in this [doc](https://portkey.ai/docs/welcome/integration-guides/mistral-ai).
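+
+To take advantage of Portkey's caching layer, you can attach a cache setting through the client's `config` parameter. The snippet below is a minimal sketch assuming the gateway config schema from the Portkey docs (cache `mode` can be `simple` or `semantic`); adapt it to your own configuration:
+
+```py
+import os
+from portkey_ai import Portkey
+
+# Assumed config schema per the Portkey docs: `mode` selects the cache
+# strategy and `max_age` is the cache TTL in seconds
+portkey = Portkey(
+    api_key=os.environ["PORTKEY_API_KEY"],
+    provider="mistral-ai",
+    authorization=f"Bearer {os.environ['MISTRAL_API_KEY']}",
+    config={"cache": {"mode": "simple", "max_age": 600}}
+)
+
+# Identical requests within the TTL are served from Portkey's cache
+cached_completion = portkey.chat.completions.create(
+    model="codestral-latest",
+    messages=[{"role": "user", "content": "Write a function for fibonacci"}]
+)
+print(cached_completion.choices[0].message.content)
+```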