diff --git a/ai-data/generative-apis/api-cli/using-chat-api.mdx b/ai-data/generative-apis/api-cli/using-chat-api.mdx index 2a2b55e64a..b1a1236731 100644 --- a/ai-data/generative-apis/api-cli/using-chat-api.mdx +++ b/ai-data/generative-apis/api-cli/using-chat-api.mdx @@ -68,23 +68,25 @@ Our chat API is OpenAI compatible. Use OpenAI’s [API reference](https://platfo - max_tokens - stream - presence_penalty -- response_format +- [response_format](/ai-data/generative-apis/how-to/use-structured-outputs) - logprobs - stop - seed +- [tools](/ai-data/generative-apis/how-to/use-function-calling) +- [tool_choice](/ai-data/generative-apis/how-to/use-function-calling) ### Unsupported parameters - frequency_penalty - n - top_logprobs -- tools -- tool_choice - logit_bias - user If you have a use case requiring one of these unsupported parameters, please [contact us via Slack](https://slack.scaleway.com/) on #ai channel. - - Go further with [Python code examples](/ai-data/generative-apis/how-to/query-text-models/#querying-text-models-via-api) to query text models using Scaleway's Chat API. - \ No newline at end of file +## Going further + +1. [Python code examples](/ai-data/generative-apis/how-to/query-text-models/#querying-text-models-via-api) to query text models using Scaleway's Chat API. +2. [How to use structured outputs](/ai-data/generative-apis/how-to/use-structured-outputs) with the `response_format` parameter +3. [How to use function calling](/ai-data/generative-apis/how-to/use-function-calling) with `tools` and `tool_choice` \ No newline at end of file diff --git a/ai-data/generative-apis/concepts.mdx b/ai-data/generative-apis/concepts.mdx index 271e06a440..8b2e247f41 100644 --- a/ai-data/generative-apis/concepts.mdx +++ b/ai-data/generative-apis/concepts.mdx @@ -20,6 +20,10 @@ API rate limits define the maximum number of requests a user can make to the Gen A context window is the maximum amount of prompt data considered by the model to generate a response. 
Using models with high context length, you can provide more information to generate relevant responses. The context is measured in tokens. +## Function calling + +Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format. + ## Embeddings Embeddings are numerical representations of text data that capture semantic information in a dense vector format. In Generative APIs, embeddings are essential for tasks such as similarity matching, clustering, and serving as inputs for downstream models. These vectors enable the model to understand and generate text based on the underlying meaning rather than just the surface-level words. diff --git a/ai-data/generative-apis/how-to/use-function-calling.mdx b/ai-data/generative-apis/how-to/use-function-calling.mdx new file mode 100644 index 0000000000..7c817d3126 --- /dev/null +++ b/ai-data/generative-apis/how-to/use-function-calling.mdx @@ -0,0 +1,331 @@ +--- +meta: + title: How to use function calling + description: Learn how to implement function calling capabilities using Scaleway's Chat Completions API service. +content: + h1: How to use function calling + paragraph: Learn how to enhance AI interactions by integrating external tools and functions using Scaleway's Chat Completions API service. +tags: chat-completions-api +dates: + validation: 2024-09-24 + posted: 2024-09-24 +--- + +Scaleway's Chat Completions API supports function calling as introduced by OpenAI. + +## What is function calling? + +Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the tool call to be done as structured data, typically in JSON format. 
While errors can occur, custom parsers or tools like LlamaIndex and LangChain can help ensure valid results. + + + +- Access to Generative APIs. +- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization +- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/) for API authentication +- Python 3.7+ installed on your system + +## Supported models + +* llama-3.1-8b-instruct +* llama-3.1-70b-instruct +* mistral-nemo-instruct-2407 + +## Understanding function calling + +Function calling consists of three main components: +- **Tool definitions**: JSON schemas that describe available functions and their parameters +- **Tool selection**: Automatic or manual selection of appropriate functions based on user queries +- **Tool execution**: Processing function calls and handling their responses + +The workflow typically follows these steps: +1. Define available tools using JSON schema +2. Send system and user query along with tool definitions +3. Process model's function selection +4. Execute selected functions +5. Return results to model for final response + +## Code examples + + + Before diving into the code examples, ensure you have the necessary libraries installed: + ```bash + pip install openai + ``` + + +We will demonstrate function calling using a flight scheduling system that allows users to check available flights between European airports. + +### Basic function definition + +First, let's define our flight schedule function and its schema: + +```python +from openai import OpenAI +import json + +def get_flight_schedule(departure_airport: str, destination_airport: str, departure_date: str) -> dict: + """ + Retrieves flight schedules between two European airports on a specific date. 
+ """ + # Mock flight schedule data + flights = { + "CDG-LHR-2024-11-01": [ + {"flight_number": "AF123", "airline": "Air France", "departure_time": "08:00", "arrival_time": "09:00"}, + {"flight_number": "BA456", "airline": "British Airways", "departure_time": "10:00", "arrival_time": "11:00"}, + {"flight_number": "LH789", "airline": "Lufthansa", "departure_time": "14:00", "arrival_time": "15:00"} + ], + "AMS-MUC-2024-11-01": [ + {"flight_number": "KL101", "airline": "KLM", "departure_time": "07:30", "arrival_time": "09:00"}, + {"flight_number": "LH202", "airline": "Lufthansa", "departure_time": "12:00", "arrival_time": "13:30"} + ] + } + + key = f"{departure_airport}-{destination_airport}-{departure_date}" + return flights.get(key, {"error": "No flights found for this route and date."}) + +# Define the tool specification +tools = [{ + "type": "function", + "function": { + "name": "get_flight_schedule", + "description": "Get available flights between two European airports on a specific date", + "parameters": { + "type": "object", + "properties": { + "departure_airport": { + "type": "string", + "description": "The IATA code of the departure airport (e.g., CDG, LHR)" + }, + "destination_airport": { + "type": "string", + "description": "The IATA code of the destination airport" + }, + "departure_date": { + "type": "string", + "description": "The date of departure in YYYY-MM-DD format" + } + }, + "required": ["departure_airport", "destination_airport", "departure_date"] + } + } +}] +``` + +### Simple function call example + +Here is how to implement a basic function call: + +```python +# Initialize the OpenAI client +client = OpenAI( + base_url="https://api.scaleway.ai/v1", + api_key="" +) + +# Create a simple query +messages = [ + { + "role": "system", + "content": "You are a helpful flight assistant." + }, + { + "role": "user", + "content": "What flights are available from CDG to LHR on November 1st, 2024?" 
+ } +] + +# Make the API call +response = client.chat.completions.create( + model="llama-3.1-70b-instruct", + messages=messages, + tools=tools, + tool_choice="auto" +) +``` + + + The model automatically decides which functions to call. However, you can specify a particular function by using the `tool_choice` parameter. In the example above, you can replace `tool_choice=auto` with `tool_choice={"type": "function", "function": {"name": "get_flight_schedule"}}` to explicitly call the desired function. + + +### Multi-turn conversation handling + +For more complex interactions, you will need to handle multiple turns of conversation: + +```python +# Process the tool call +if response.choices[0].message.tool_calls: + tool_call = response.choices[0].message.tool_calls[0] + + # Execute the function + if tool_call.function.name == "get_flight_schedule": + function_args = json.loads(tool_call.function.arguments) + function_response = get_flight_schedule(**function_args) + + # Add results to the conversation + messages.extend([ + { + "role": "assistant", + "content": None, + "tool_calls": [tool_call] + }, + { + "role": "tool", + "name": tool_call.function.name, + "content": json.dumps(function_response), + "tool_call_id": tool_call.id + } + ]) + + # Get final response + final_response = client.chat.completions.create( + model="llama-3.1-70b-instruct", + messages=messages + ) + print(final_response.choices[0].message.content) +``` + +### Parallel function calling + + + Meta models do not support parallel tool calls. + + +In addition to one function call described above, you can also call multiple functions in a single turn. +This section shows an example for how you can use parallel function calling. 
+
+Define the tools:
+
+```python
+def open_floor_space(floor_number: int) -> bool:
+    """Opens up the specified floor for party space by unlocking doors and moving furniture."""
+    print(f"Floor {floor_number} is now open party space!")
+    return True
+
+def set_lobby_vibe(party_mode: bool) -> str:
+    """Switches lobby screens and lighting to party mode."""
+    status = "party mode activated!" if party_mode else "back to business mode"
+    print(f"Lobby is now in {status}")
+    return "The lobby is ready to party!"
+
+def prep_snack_station(activate: bool) -> bool:
+    """Converts the cafeteria into a snack and drink station."""
+    print(f"Snack station is {'open and stocked!' if activate else 'closed.'}")
+    return True
+```
+
+Define the specifications:
+
+```python
+tools = [
+    {
+        "type": "function",
+        "function": {
+            "name": "open_floor_space",
+            "description": "Opens up an entire floor for the party",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "floor_number": {
+                        "type": "integer",
+                        "description": "Which floor to open up"
+                    }
+                },
+                "required": ["floor_number"]
+            }
+        }
+    },
+    {
+        "type": "function",
+        "function": {
+            "name": "set_lobby_vibe",
+            "description": "Transform lobby atmosphere into party mode",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "party_mode": {
+                        "type": "boolean",
+                        "description": "True for party, False for business"
+                    }
+                },
+                "required": ["party_mode"]
+            }
+        }
+    },
+    {
+        "type": "function",
+        "function": {
+            "name": "prep_snack_station",
+            "description": "Set up the snack and drink station",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "activate": {
+                        "type": "boolean",
+                        "description": "True to open, False to close"
+                    }
+                },
+                "required": ["activate"]
+            }
+        }
+    }
+]
+```
+
+Next, call the model with proper instructions:
+
+```python
+system_prompt = """
+You are an office party control assistant. When asked to transform the office into a party space, you should:
+1. Open up a floor for the party
+2. 
Transform the lobby into party mode +3. Set up the snack station +Make all these changes at once for an instant office party! +""" + +messages = [ + {"role": "system", "content": system_prompt}, + {"role": "user", "content": "Turn this office building into a party!"} +] +``` + +## Best practices + +When implementing function calling, follow these guidelines for optimal results: + +1. **Function design** + - Keep function names clear and descriptive + - Limit the number of functions to 7 or fewer per conversation + - Use detailed parameter descriptions in your JSON schema + +2. **Parameter handling** + - Always specify required parameters + - Use appropriate data types and validation + - Include example values in parameter descriptions + +3. **Error handling** + - Implement robust error handling for function execution + - Return clear error messages that the model can interpret + - Handle edge cases gracefully + +4. **Performance optimization** + - Set appropriate temperature values (lower for more precise function calls) + - Cache frequently accessed data when possible + - Minimize the number of turns in multi-turn conversations + + + For production applications, always implement proper error handling and input validation. The examples above focus on the happy path for clarity. + + +## Further resources + +For more information about function calling and advanced implementations, refer to these resources: + +- [OpenAI Function Calling Guide](https://platform.openai.com/docs/guides/function-calling) +- [JSON Schema Specification](https://json-schema.org/specification) +- [Chat Completions API Reference](/ai-data/generative-apis/api-cli/using-chat-api/) + +Function calling significantly extends the capabilities of language models by allowing them to interact with external tools and APIs. + + + We can't wait to see what you will build with function calls. 
Tell us what you are up to, and share your experiments in the #ai channel of Scaleway's [Slack community](https://slack.scaleway.com/).
+
\ No newline at end of file
diff --git a/ai-data/generative-apis/how-to/use-structured-outputs.mdx b/ai-data/generative-apis/how-to/use-structured-outputs.mdx
index f2c7b93632..35bb63f8a9 100644
--- a/ai-data/generative-apis/how-to/use-structured-outputs.mdx
+++ b/ai-data/generative-apis/how-to/use-structured-outputs.mdx
@@ -5,7 +5,7 @@ meta:
 content:
   h1: How to use structured outputs
   paragraph: Learn how to interact with powerful text models using Scaleway's Chat Completions API service.
-tags: chat-completitions-api
+tags: chat-completions-api
 dates:
   validation: 2024-09-17
   posted: 2024-09-17
diff --git a/ai-data/managed-inference/concepts.mdx b/ai-data/managed-inference/concepts.mdx
index 4745322f9c..5b84c499c3 100644
--- a/ai-data/managed-inference/concepts.mdx
+++ b/ai-data/managed-inference/concepts.mdx
@@ -42,6 +42,10 @@ Fine-tuning involves further training a pre-trained language model on domain-spe
 Few-shot prompting uses the power of language models to generate responses with minimal input, relying on just a handful of examples or prompts. It demonstrates the model's ability to generalize from limited training data to produce coherent and contextually relevant outputs.
 
+## Function calling
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format.
+
 ## Hallucinations
 
 Hallucinations in LLMs refer to instances where generative AI models generate responses that, while grammatically coherent, contain inaccuracies or nonsensical information. These inaccuracies are termed "hallucinations" because the models create false or misleading content.
Hallucinations can occur because of constraints in the training data, biases embedded within the models, or the complex nature of language itself.
diff --git a/ai-data/managed-inference/reference-content/function-calling-support.mdx b/ai-data/managed-inference/reference-content/function-calling-support.mdx
new file mode 100644
index 0000000000..19351f3ddb
--- /dev/null
+++ b/ai-data/managed-inference/reference-content/function-calling-support.mdx
@@ -0,0 +1,49 @@
+---
+meta:
+  title: Support for function calling in Scaleway Managed Inference
+  description: Function calling allows models to connect to external tools.
+content:
+  h1: Support for function calling in Scaleway Managed Inference
+  paragraph: Function calling allows models to connect to external tools.
+tags:
+categories:
+  - ai-data
+---
+
+## What is function calling?
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format. While errors can occur, custom parsers or tools like LlamaIndex and LangChain can help ensure valid results.
+
+## How to implement function calling in Scaleway Managed Inference?
+
+[This tutorial](/tutorials/building-ai-application-function-calling/) will guide you through the steps of creating a simple flight schedule assistant that can understand natural language queries about flights and return structured information.
+
+## Which models support function calling?
+ +The following models in Scaleway's Managed Inference library can call tools as per the OpenAI method: + +* meta/llama-3.1-8b-instruct +* meta/llama-3.1-70b-instruct +* mistral/mistral-7b-instruct-v0.3 +* mistral/mistral-nemo-instruct-2407 + +## Understanding function calling + +Function calling consists of three main components: +- **Tool definitions**: JSON schemas that describe available functions and their parameters +- **Tool selection**: Automatic or manual selection of appropriate functions based on user queries +- **Tool execution**: Processing function calls and handling their responses + +The workflow typically follows these steps: +1. Define available tools using JSON schema +2. Send system and user query along with tool definitions +3. Process model's function selection +4. Execute selected functions +5. Return results to model for final response + +## Further resources + +For more information about function calling and advanced implementations, refer to these resources: + +- [OpenAI Function Calling Guide](https://platform.openai.com/docs/guides/function-calling) +- [JSON Schema Specification](https://json-schema.org/specification) diff --git a/ai-data/managed-inference/reference-content/openai-compatibility.mdx b/ai-data/managed-inference/reference-content/openai-compatibility.mdx index cadd000eb4..ff98d0a9de 100644 --- a/ai-data/managed-inference/reference-content/openai-compatibility.mdx +++ b/ai-data/managed-inference/reference-content/openai-compatibility.mdx @@ -48,7 +48,7 @@ chat_completion = client.chat.completions.create( "content": "Sing me a song about Scaleway" } ], - model='' #e.g 'llama-3-8b-instruct' + model='' #e.g 'meta/llama-3.1-8b-instruct:fp8' ) print(chat_completion.choices[0].message.content) @@ -71,6 +71,8 @@ print(chat_completion.choices[0].message.content) - `stop` - `seed` - `stream` +- `tools` +- `tool_choice` ### Unsupported parameters @@ -79,8 +81,6 @@ Currently, the following options are not supported: - 
`frequency_penalty` - `n` - `top_logprobs` -- `tools` -- `tool_choice` - `logit_bias` - `user` diff --git a/menu/navigation.json b/menu/navigation.json index a5a6fb6625..86d74e66f8 100644 --- a/menu/navigation.json +++ b/menu/navigation.json @@ -591,6 +591,10 @@ "label": "OpenAI API compatibility", "slug": "openai-compatibility" }, + { + "label": "Support for function calling", + "slug": "function-calling-support" + }, { "label": "Llama-3-8b-instruct model", "slug": "llama-3-8b-instruct" @@ -666,6 +670,10 @@ { "label": "Use structured outputs", "slug": "use-structured-outputs" + }, + { + "label": "Use function calling", + "slug": "use-function-calling" } ], "label": "How to", diff --git a/tutorials/building-ai-application-function-calling/assets/function-calling.webp b/tutorials/building-ai-application-function-calling/assets/function-calling.webp new file mode 100644 index 0000000000..befbc5094d Binary files /dev/null and b/tutorials/building-ai-application-function-calling/assets/function-calling.webp differ diff --git a/tutorials/building-ai-application-function-calling/index.mdx b/tutorials/building-ai-application-function-calling/index.mdx new file mode 100644 index 0000000000..e5fd792794 --- /dev/null +++ b/tutorials/building-ai-application-function-calling/index.mdx @@ -0,0 +1,272 @@ +--- +meta: + title: Get started with agentic AI - building a flight assistant with function calling on open-weight Llama 3.1 + description: Learn how to implement function calling in your applications using a practical flight schedule example. +content: + h1: Get started with agentic AI - building a flight assistant with function calling on open-weight Llama 3.1 + paragraph: Create a smart flight assistant that can understand natural language queries and return structured flight information using function calling capabilities. 
+tags: AI function-calling LLM python structured-data +categories: + - managed-inference + - generative-apis +hero: assets/function-calling.webp +dates: + validation: 2024-10-25 + posted: 2024-10-25 +--- + +In today's AI-driven world, enabling natural language interactions with structured data systems has become increasingly important. Function calling allows AI models like Llama 3.1 to bridge the gap between human queries and programmatic functions, creating powerful agents for many use cases. + +This tutorial will guide you through creating a simple flight schedule assistant that can understand natural language queries about flights and return structured information. We'll use Python and the OpenAI SDK to implement function calling on Llama 3.1, making it easy to integrate this solution into your existing applications. + + + +- A Scaleway account logged into the [console](https://console.scaleway.com) +- Python 3.7 or higher +- An API key from Scaleway [Identity and Access Management](https://www.scaleway.com/en/docs/identity-and-access-management/iam/) +- Access to Scaleway [Generative APIs](/ai-data/generative-apis/quickstart/) or to Scaleway [Managed Inference](/ai-data/managed-inference/quickstart/) +- The `openai` Python library installed + +## Understanding function calling + +Function calling allows AI models to: +- Understand when to use specific functions based on user queries +- Extract relevant parameters from natural language +- Format the extracted information into structured function calls +- Process the function results and present them in a user-friendly way + +## Setting up the environment + +1. Create a new directory for your project: + ``` + mkdir flight-assistant + cd flight-assistant + ``` + +2. Create and activate a virtual environment: + ``` + python3 -m venv venv + source venv/bin/activate # On Windows, use `venv\Scripts\activate` + ``` + +3. 
Install the required library: + ``` + pip install openai + ``` + +## Creating the flight schedule function + +First, let's create a simple function that returns flight schedules. Create a file called `flight_schedule.py`: + +```python +def get_flight_schedule(departure_airport: str, destination_airport: str, departure_date: str) -> dict: + """ + Get available flights between two airports on a specific date. + + Args: + departure_airport (str): IATA code of departure airport (e.g., "CDG") + destination_airport (str): IATA code of destination airport (e.g., "LHR") + departure_date (str): Date in YYYY-MM-DD format + + Returns: + dict: Available flights with their details + """ + # Mock flight database - in a real application, this would query an actual database + flights = { + "CDG-LHR-2024-11-01": [ + { + "flight_number": "AF123", + "airline": "Air France", + "departure_time": "08:00", + "arrival_time": "09:00", + "price": "€150" + }, + { + "flight_number": "BA456", + "airline": "British Airways", + "departure_time": "14:00", + "arrival_time": "15:00", + "price": "€180" + } + ] + } + + key = f"{departure_airport}-{destination_airport}-{departure_date}" + return flights.get(key, {"error": "No flights found for this route and date."}) +``` + +## Setting up the AI assistant + +Create a new file called `assistant.py` to handle the AI interactions: + +```python +from openai import OpenAI +import os +import json +from flight_schedule import get_flight_schedule + +# Initialize the OpenAI client with Scaleway configuration + +MODEL="meta/llama-3.1-70b-instruct:fp8" +# use the right name according to your Managed Inference deployment or Generative APIs model + +API_KEY = os.environ.get("SCALEWAY_API_KEY") +BASE_URL = os.environ.get("SCALEWAY_INFERENCE_ENDPOINT_URL") +# use https://api.scaleway.ai/v1 for Scaleway Generative APIs + +client = OpenAI( + base_url=BASE_URL, + api_key=API_KEY +) + +# Define the tool specification +tools = [{ + "type": "function", + "function": { + 
"name": "get_flight_schedule", + "description": "Get available flights between two airports on a specific date", + "parameters": { + "type": "object", + "properties": { + "departure_airport": { + "type": "string", + "description": "IATA code of departure airport (e.g., CDG, LHR)" + }, + "destination_airport": { + "type": "string", + "description": "IATA code of destination airport (e.g., CDG, LHR)" + }, + "departure_date": { + "type": "string", + "description": "Date in YYYY-MM-DD format" + } + }, + "required": ["departure_airport", "destination_airport", "departure_date"] + } + } +}] + +def process_query(user_query: str) -> str: + """Process a natural language query about flights.""" + + # Initial conversation with the model + messages = [ + { + "role": "system", + "content": "You are a helpful flight assistant. Help users find flights by calling the appropriate function." + }, + { + "role": "user", + "content": user_query + } + ] + + # Get the model's response + response = client.chat.completions.create( + model=MODEL, + messages=messages, + tools=tools, + tool_choice="auto" + ) + + # Check if the model wants to call a function + response_message = response.choices[0].message + + if response_message.tool_calls: + # Get function call details + tool_call = response_message.tool_calls[0] + function_name = tool_call.function.name + function_args = json.loads(tool_call.function.arguments) + + # Execute the function + if function_name == "get_flight_schedule": + function_response = get_flight_schedule(**function_args) + + # Add the function result to the conversation + messages.append(response_message) + messages.append({ + "role": "tool", + "content": json.dumps(function_response), + "tool_call_id": tool_call.id + }) + + # Get final response + final_response = client.chat.completions.create( + model=MODEL, + messages=messages + ) + + return final_response.choices[0].message.content + + return response_message.content +``` + +## Creating the main application + +Create 
a file called `main.py` to run the assistant: + +```python +from assistant import process_query + +def main(): + print("Welcome to the Flight Schedule Assistant!") + print("Ask about flights using natural language (or type 'quit' to exit)") + print("Example: What flights are available from CDG to LHR on November 1st, 2024?") + + while True: + query = input("\nYour query: ") + if query.lower() == 'quit': + break + + response = process_query(query) + print("\nAssistant:", response) + +if __name__ == "__main__": + main() +``` + +## Running the application + +1. Set your Scaleway API key: + ``` + export SCALEWAY_API_KEY="your-api-key-here" + ``` + +2. Run the application: + ``` + python main.py + ``` + +3. Try some example queries: + - "What flights are available from CDG to LHR tomorrow?" + - "Show me morning flights from Paris to London on November 1st" + - "Are there any afternoon flights from CDG to LHR on 2024-11-01?" + +## How it works + +1. **User input**: The application receives a natural language query about flights. + +2. **Function recognition**: The AI model analyzes the query and determines that it needs flight schedule information. + +3. **Parameter extraction**: The model extracts key information (airports, date) from the query. + +4. **Function calling**: The model returns the function call to be made by the user, in this case the `get_flight_schedule` function with the extracted parameters provided by the model. + +5. **Response generation**: The model receives the function's response and generates a natural language reply for the user. + +## Customizing the application + +You can enhance the flight assistant in several ways: + +1. **Add real data**: Replace the mock flight database with actual flight API calls. +2. **Expand functions**: Add functions for booking flights, checking prices, or getting airport information. +3. **Improve error handling**: Add validation for airport codes and dates. +4. 
**Add memory**: Implement conversation history to handle follow-up questions. + +## Conclusion + +Function calling bridges the gap between natural language processing and structured data operations. This flight schedule assistant demonstrates how to implement function calling to create intuitive interfaces for your applications. + + + Remember to handle user data responsibly and validate all inputs before making actual flight queries or bookings in a production environment. +
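The input validation suggested above can start as a small helper run before `get_flight_schedule`. The sketch below is illustrative rather than part of the tutorial code: `validate_flight_query` is a hypothetical helper that checks the IATA code shape (three uppercase letters) and the YYYY-MM-DD date format before any real query is made.

```python
import re
from datetime import datetime

def validate_flight_query(departure_airport: str, destination_airport: str,
                          departure_date: str) -> list:
    """Return a list of human-readable problems; an empty list means the query is valid."""
    problems = []
    iata = re.compile(r"^[A-Z]{3}$")  # IATA airport codes are three uppercase letters
    for label, code in (("departure", departure_airport),
                        ("destination", destination_airport)):
        if not iata.match(code):
            problems.append(f"Invalid {label} airport code: {code!r}")
    try:
        # Reject anything that does not parse as a real YYYY-MM-DD date
        datetime.strptime(departure_date, "%Y-%m-%d")
    except ValueError:
        problems.append(f"Invalid date (expected YYYY-MM-DD): {departure_date!r}")
    return problems

print(validate_flight_query("CDG", "LHR", "2024-11-01"))  # []
print(validate_flight_query("cdg", "LHR", "01/11/2024"))  # two problems reported
```

Rejecting malformed parameters before executing the function also gives the model a clear error message to relay back to the user.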