From 23fbfa8c1c695cbdde7d711aeaa26aa046bb77f5 Mon Sep 17 00:00:00 2001
From: Ryan Cartwright
Date: Wed, 13 Nov 2024 17:29:52 +1100
Subject: [PATCH 1/9] add rag guide

---
 docs/guides/python/llama-rag.mdx | 290 +++++++++++++++++++++++++++++++
 1 file changed, 290 insertions(+)
 create mode 100644 docs/guides/python/llama-rag.mdx

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
new file mode 100644
index 000000000..d04bc15dc
--- /dev/null
+++ b/docs/guides/python/llama-rag.mdx
@@ -0,0 +1,290 @@
---
description: 'Transform your LLMs with Retrieval Augmented Generation'
tags:
  - API
  - AI & Machine Learning
languages:
  - python
published_at: 2024-11-16
updated_at: 2024-11-16
---

# Using Retrieval Augmented Generation to enhance your LLMs

This guide shows how to use Retrieval Augmented Generation (RAG) to enhance a large language model (LLM). RAG is the process of enabling an LLM to reference context outside of its initial training data before generating its response. It can be extremely expensive in both time and computing power to train a model that is useful for your own domain-specific purposes. Therefore, using RAG is a cost-effective way to extend the capabilities of an existing LLM.

## Prerequisites

- [uv](https://docs.astral.sh/uv/#getting-started) - for Python dependency management
- The [Nitric CLI](/get-started/installation)
- _(optional)_ An [AWS](https://aws.amazon.com) account

## Getting started

We'll start by creating a new project using Nitric's python starter template.

<Note>
  If you want to take a look at the finished code, it can be found
  [here](https://github.com/nitrictech/examples/tree/main/v1/llama-rag).
</Note>

```bash
nitric new llama-rag py-starter
cd llama-rag
```

Next, let's install our base dependencies, then add the `llama-index` libraries. We'll be using [llama index](https://docs.llamaindex.ai/en/stable/) as it makes creating RAG applications extremely simple and has support for running our own local Llama 3.2 models.

```bash
# Install the base dependencies
uv sync
# Add Llama index dependencies
uv add llama-index llama-index-embeddings-huggingface llama-index-llms-llama-cpp
```

We'll organize our project structure like so:

```text
+--common/
| +-- __init__.py
| +-- model_parameters.py
+--model/
| +-- Llama-3.2-1B-Instruct-Q4_K_M.gguf
+--services/
| +-- api.py
+--.gitignore
+--.python-version
+-- build_query_engine.py
+-- pyproject.toml
+-- python.dockerfile
+-- python.dockerignore
+-- nitric.yaml
+-- README.md
```

## Setting up our LLM

Before we even start writing code for our LLM we'll want to download the model into our project. For this project we'll be using Llama 3.2 with a Q4_K_M quant.

```bash
mkdir model
cd model
curl -OL https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf
cd ..
```

Now that we have our model we can load it into our code. We'll also define our [embed model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/) - for vectorising our documentation - using a recommended [embed model](https://huggingface.co/BAAI/bge-large-en-v1.5) from Hugging Face. At this point we can also create a prompt template for prompts with our query engine. This helps reduce hallucinations: if the model does not know an answer, it will say so instead of inventing one.
```python title:common/model_parameters.py
from llama_index.core import ChatPromptTemplate
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.llama_cpp import LlamaCPP


# Load the locally stored Llama model
llm = LlamaCPP(
    model_url=None,
    model_path="./model/Llama-3.2-1B-Instruct-Q4_K_M.gguf",
    temperature=0.7,
    verbose=False,
)

# Load the embed model from Hugging Face
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5", trust_remote_code=True)

# Set the location that we will persist our embeds
persist_dir = "query_engine_vectors"

# Create the prompt query templates to reduce hallucinations
text_qa_template = ChatPromptTemplate.from_messages([
    (
        "system",
        "If the context is not useful, respond with 'I'm not sure'.",
    ),
    (
        "user",
        (
            "Context information is below.\n"
            "---------------------\n"
            "{context_str}\n"
            "---------------------\n"
            "Given the context information and not prior knowledge "
            "answer the question: {query_str}.\n"
        )
    ),
])
```

## Building a Query Engine

The next step is where we embed our context into the LLM. For this example we can embed the Nitric documentation to allow searchability using the LLM. It's open-source on [GitHub](https://github.com/nitrictech/docs), so we can clone it into our project.

```bash
git clone https://github.com/nitrictech/docs.git nitric-docs
```

We can then create our embedding and store it locally.

```python title:build_query_engine.py
from common.model_parameters import llm, embed_model, persist_dir

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings


# Set global settings for llama index
Settings.llm = llm
Settings.embed_model = embed_model

# Load data from the documents directory
loader = SimpleDirectoryReader(
    # The location of the documents you want to embed
    input_dir = "./nitric-docs/",
    # Set the extension to what format your documents are in
    required_exts=[".mdx"],
    # Search through documents recursively
    recursive=True
)
docs = loader.load_data()

# Embed the docs into a vector index using the embed model
index = VectorStoreIndex.from_documents(docs, show_progress=True)

# Save the query engine index to the local machine
index.storage_context.persist(persist_dir)
```

You can then run this using the following command. This should write the embeddings into your `persist_dir`.

```bash
uv run build_query_engine.py
```
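Before putting an API in front of the engine, it's worth a quick smoke test that the persisted index loads and answers a question. A minimal sketch (a hypothetical `test_query.py`, not part of the project structure above), reusing the same llama index calls the service below will use:

```python title:test_query.py
from common.model_parameters import llm, embed_model, persist_dir, text_qa_template

from llama_index.core import StorageContext, load_index_from_storage, Settings

Settings.llm = llm
Settings.embed_model = embed_model

# Reload the persisted vectors and ask a throwaway question
storage_context = StorageContext.from_defaults(persist_dir=persist_dir)
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine(text_qa_template=text_qa_template)

print(query_engine.query("What is Nitric?"))
```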
## Creating an API for querying our model

With our LLM ready for querying, we can create an API to handle prompts.

```python
import os

from common.model_parameters import embed_model, llm, text_qa_template, persist_dir

from nitric.resources import api
from nitric.context import HttpContext
from nitric.application import Nitric
from llama_index.core import StorageContext, load_index_from_storage, Settings

# Set global settings for llama index
Settings.llm = llm
Settings.embed_model = embed_model

main_api = api("main")

@main_api.post("/prompt")
async def query_model(ctx: HttpContext):
    # Pull the data from the request body
    query = str(ctx.req.data)

    print(f"Querying model: \"{query}\"")

    # Get the model from the stored local context
    if os.path.exists(persist_dir):
        storage_context = StorageContext.from_defaults(persist_dir=persist_dir)

        index = load_index_from_storage(storage_context)

        # Get the query engine from the index, and use the prompt template for sanitisation.
        query_engine = index.as_query_engine(streaming=False, similarity_top_k=4, text_qa_template=text_qa_template)
    else:
        print("model does not exist")
        ctx.res.success= False
        return ctx

    # Query the model
    response = query_engine.query(query)

    ctx.res.body = f"{response}"

    print(f"Response: \n{response}")

    return ctx

Nitric.run()
```

## Test it locally

Now that we have an API defined, we can test it locally. You can do this using `nitric start` and then make a request to the API either through the [Nitric Dashboard](/get-started/foundations/projects/local-development#local-dashboard) or another HTTP client like cURL.

```bash
curl -X POST http://localhost:4001/prompt -d "What is Nitric?"
```

This should produce an output similar to:

```text
Nitric is a cloud-agnostic framework designed to aid developers in building full cloud applications, including infrastructure. It is a declarative cloud framework with common resources like APIs, websockets, databases, queues, topics, buckets, and more. The framework provides tools for locally simulating a cloud environment, to allow an application to be tested locally, and it makes it possible to interact with resources at runtime. It is a lightweight and flexible framework that allows developers to structure their projects according to their preferences and needs. Nitric is not a replacement for IaC tools like Terraform but rather introduces a method of bringing developer self-service for infrastructure directly into the developer application. Nitric can be augmented through use of tools like Pulumi or Terraform and even be fully customized using such tools. The framework supports multiple programming languages, and its default deployment engines are built with Pulumi. Nitric provides tools for defining services in your project's `nitric.yaml` file, and each service can be run independently, allowing your app to scale and manage different workloads efficiently. Services are the heart of Nitric apps, they're the entrypoints to your code. They can serve as APIs, websockets, schedule handlers, subscribers and a lot more.
```
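If you'd rather script that check than use cURL, any HTTP client works. A minimal sketch, assuming the `requests` package is installed (`uv add requests`):

```python title:test_prompt.py
import requests

# Send a prompt to the locally running API and print the model's response
response = requests.post("http://localhost:4001/prompt", data="What is Nitric?")
print(response.text)
```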
Update line 2:

```dockerfile title:python.dockerfile
# !diff -
FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim AS builder
# !diff +
FROM ghcr.io/astral-sh/uv:python3.11-bookworm AS builder
```

And line 18:

```dockerfile title:python.dockerfile
# !diff -
FROM python:3.11-slim-bookworm
# !diff +
FROM python:3.11-bookworm
```

When you're ready to deploy the project, we can create a new Nitric stack file that will target AWS:

```bash
nitric stack new dev aws
```

Update the stack file `nitric.dev.yaml` with the appropriate AWS region and memory allocation to handle the model:

```yaml title:nitric.dev.yaml
provider: nitric/aws@1.14.0
region: us-east-1
config:
  # How services will be deployed by default, if you have other services not running models
  # you can add them here too so they don't use the same configuration
  default:
    lambda:
      # Set the memory to 6GB to handle the model, this automatically sets additional CPU allocation
      memory: 6144
      # Set a timeout of 30 seconds (this is the most API Gateway will wait for a response)
      timeout: 30
      # We add more storage to the lambda function, so it can store the model
      ephemeral-storage: 1024
```

We can then deploy using the following command:

```bash
nitric up
```

Testing on AWS is the same as testing locally; we'll just use cURL to make a request to the API URL that was output at the end of the deployment.

```bash
curl -X POST {your AWS endpoint URL here}/prompt -d "What is Nitric?"
```

Once you're finished querying the model, you can destroy the deployment using `nitric down`.

## Summary

In this project we've successfully augmented an LLM using Retrieval Augmented Generation (RAG) techniques with Llama Index and Nitric. You can modify this project to use any LLM, change the prompt template to be more specific in responses, or change the context for your own personal requirements. We could extend this project to maintain context between requests using WebSockets to have more of a chat-like experience with the model.

From d6e712d98cc3770f42f062e3a7528840245f1877 Mon Sep 17 00:00:00 2001
From: Ryan Cartwright
Date: Tue, 24 Dec 2024 11:36:39 +1100
Subject: [PATCH 2/9] update rag guide to use websocket

---
 docs/guides/python/llama-rag.mdx            | 499 +++++++++++++++-----
 public/images/guides/llama-rag/featured.png | Bin 0 -> 60745 bytes
 2 files changed, 375 insertions(+), 124 deletions(-)
 create mode 100644 public/images/guides/llama-rag/featured.png

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index d04bc15dc..6a5faf409 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -1,17 +1,21 @@
---
-description: 'Transform your LLMs with Retrieval Augmented Generation'
+description: 'Making LLMs smarter with Extensible Knowledge Access using Retrieval Augmented Generation'
tags:
-  - API
+  - Realtime & Websockets
  - AI & Machine Learning
languages:
  - python
+featured:
+  image: /docs/images/guides/llama-rag/featured.png
+  image_alt: 'Llama RAG featured image'
-published_at: 2024-11-16
-updated_at: 2024-11-16
+published_at: 2024-11-21
+updated_at: 2024-11-21
---

-# Using Retrieval Augmented Generation to enhance your LLMs
+# Making LLMs smarter with Extensible Knowledge Access

-This guide shows how to use Retrieval Augmented Generation (RAG) to enhance a large language model (LLM). RAG is the process of enabling an LLM to reference context outside of its initial training data before generating its response. It can be extremely expensive in both time and computing power to train a model that is useful for your own domain-specific purposes. Therefore, using RAG is a cost-effective way to extend the capabilities of an existing LLM.
+This guide shows how to use Retrieval Augmented Generation (RAG) to enhance a large language model (LLM). RAG is the process of enabling an LLM to reference context outside of its initial training data before generating its response. Training a model that is useful for your own domain-specific purposes can be extremely expensive in both time and computing power. Therefore, using RAG is a cost-effective way to extend the capabilities of an existing LLM.
+To demonstrate RAG in this guide, we'll provide Llama 3.2 with access to Nitric's documentation so that it can answer specific questions. You can adapt this guide to another data source that meets your needs.

## Prerequisites

- [uv](https://docs.astral.sh/uv/#getting-started) - for Python dependency management
- The [Nitric CLI](/get-started/installation)
- _(optional)_ An [AWS](https://aws.amazon.com) account

## Getting started

Next, let's install our base dependencies, then add the `llama-index` libraries.

```bash
# Install the base dependencies
uv sync
# Add Llama index dependencies
-uv add llama-index llama-index-embeddings-huggingface llama-index-llms-llama-cpp
+uv add llama-index llama-index-embeddings-huggingface llama-index-llms-llama-cpp --optional ml
```

<Note>
  We add the extra dependencies to the 'ml' optional dependencies to keep them
  separate since they can be quite large. This lets us just install them in the
  containers that need them.
</Note>

We'll organize our project structure like so:

```text
+--common/
| +-- __init__.py
| +-- model_parameters.py
| +-- resources.py
+--services/
| +-- subscriber.py
| +-- chat.py
+--.gitignore
+--.python-version
+-- model.dockerfile
+-- model.dockerignore
+-- model_utilities.py
+-- pyproject.toml
+-- python.dockerfile
+-- python.dockerignore
+-- nitric.yaml
+-- README.md
```

## Setting up our LLM

-Before we even start writing code for our LLM we'll want to download the model into our project. For this project we'll be using Llama 3.2 with a Q4_K_M quant.
+We'll define a `ModelParameters` class which will hold the parameters used throughout our application. Putting them in a class means the LLM, [embed model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/), and tokenizer are loaded lazily, so they don't slow down modules that don't need everything initialized. At this point we can also create a prompt template for prompts with our query engine. This helps reduce hallucinations: if the model does not know an answer, it will say so instead of inventing one. We'll also define two functions that will convert a prompt or message into the required Llama 3.2 format.

-```bash
-mkdir model
-cd model
-curl -OL https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf
-cd ..
-```

-Now that we have our model we can load it into our code. We'll also define our [embed model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/) - for vectorising our documentation - using a recommended [embed model](https://huggingface.co/BAAI/bge-large-en-v1.5) from Hugging Face. At this point we can also create a prompt template for prompts with our query engine. This helps reduce hallucinations: if the model does not know an answer, it will say so instead of inventing one.
```python title:common/model_parameters.py
import os


# Convert the messages into Llama 3.2 format
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == 'system':
            prompt += f"<|system|>\n{message.content}\n"
        elif message.role == 'user':
            prompt += f"<|user|>\n{message.content}\n"
        elif message.role == 'assistant':
            prompt += f"<|assistant|>\n{message.content}\n"

    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n\n" + prompt

    # add final assistant prompt
    prompt = prompt + "<|assistant|>\n"

    return prompt

# Convert the completed prompt into Llama 3.2 format
def completion_to_prompt(completion):
    return f"<|system|>\n\n<|user|>\n{completion}\n<|assistant|>\n"

class ModelParameters:
    # Lazily loaded llm
    _llm = None

    # Lazily loaded embed model
    _embed_model = None

    # Lazily loaded tokenizer
    _tokenizer = None

    # Set the location that we will persist our embeds
    persist_dir = "./models/query_engine_db"

    # Set the location to cache the embed model
    embed_cache_folder = os.getenv("HF_CACHE") or "./models/vector_model_cache"

    # Set the location to store the llm
    llm_cache_folder = "./models/llm_cache"

    # Create the prompt query template to reduce hallucinations
    prompt_template = (
        "Context information is below. If the context is not useful, respond with 'I'm not sure'. "
        "Given the context information and not prior knowledge, answer the prompt.\n"
        "{context_str}\n"
    )

    def __init__(self):
        # Lazily load the locally stored Llama model
        self._llm = None
        # Lazily load the embed model from Hugging Face
        self._embed_model = None
        # Lazily load the tokenizer
        self._tokenizer = None

    @property
    def llm(self):
        from llama_index.llms.llama_cpp import LlamaCPP

        if self._llm is None:
            print("Initializing Llama CPP Model...")
            self._llm = LlamaCPP(
                model_path=f"{self.llm_cache_folder}/Llama-3.2-1B-Instruct-Q4_K_M.gguf",
                temperature=0.7,
                # Increase for longer responses
                max_new_tokens=512,
                context_window=3900,
                generate_kwargs={},
                # set to at least 1 to use GPU
                model_kwargs={"n_gpu_layers": 1},
                # transform inputs into Llama 3.2 format
                messages_to_prompt=messages_to_prompt,
                completion_to_prompt=completion_to_prompt,
                verbose=False,
            )
        return self._llm

    @property
    def embed_model(self):
        from llama_index.embeddings.huggingface import HuggingFaceEmbedding

        if self._embed_model is None:
            print("Initializing Embed Model...")
            self._embed_model = HuggingFaceEmbedding(
                model_name=self.embed_cache_folder,
                cache_folder=self.embed_cache_folder
            )
        return self._embed_model

    @property
    def tokenizer(self):
        from transformers import AutoTokenizer

        if self._tokenizer is None:
            print("Initializing Tokenizer")
            self._tokenizer = AutoTokenizer.from_pretrained(
                "pcuenq/Llama-3.2-1B-Instruct-tokenizer"
            ).encode
        return self._tokenizer
```
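To see what the formatting helpers produce, you can run them ad hoc. A quick sanity check (a hypothetical snippet, not part of the project):

```python
from common.model_parameters import messages_to_prompt
from llama_index.core.llms import ChatMessage, MessageRole

# Format a single user message into the Llama 3.2 prompt layout
print(messages_to_prompt([
    ChatMessage(role=MessageRole.USER, content="What is Nitric?"),
]))
# <|system|>
#
# <|user|>
# What is Nitric?
# <|assistant|>
```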
" + "Given the context information and not prior knowledge answer the prompt.\n" + "{context_str}\n" + ) + + + def __init__(self): + # Lazily load the locally stored Llama model + self._llm = None + # Lazily load the Embed from Hugging Face model + self._embed_model = None + # Lazily loaded the tokenizer + self._tokenizer = None + + @property + def llm(self): + from llama_index.llms.llama_cpp import LlamaCPP + + if self._llm is None: + print("Initializing Llama CPP Model...") + self._llm = LlamaCPP( + model_path=f"{self.llm_cache_folder}/Llama-3.2-1B-Instruct-Q4_K_M.gguf", + temperature=0.7, + # Increase for longer responses + max_new_tokens=512, + context_window=3900, + generate_kwargs={}, + # set to at least 1 to use GPU + model_kwargs={"n_gpu_layers": 1}, + # transform inputs into Llama3.2 format + messages_to_prompt=messages_to_prompt, + completion_to_prompt=completion_to_prompt, + verbose=False, + ) + return self._llm + + @property + def embed_model(self): + from llama_index.embeddings.huggingface import HuggingFaceEmbedding + + if self._embed_model is None: + print("Initializing Embed Model...") + self._embed_model = HuggingFaceEmbedding( + model_name=self.embed_cache_folder, + cache_folder=self.embed_cache_folder + ) + return self._embed_model + + @property + def tokenizer(self): + from transformers import AutoTokenizer + + if self._tokenizer is None: + print("Initializing Tokenizer") + self._tokenizer = AutoTokenizer.from_pretrained( + "pcuenq/Llama-3.2-1B-Instruct-tokenizer" + ).encode + return self._tokenizer ``` ## Building a Query Engine -The next step is where we embed our context into the LLM. For this example we can embed the Nitric documentation to allow searchability using the LLM. It's open-source on [GitHub](https://github.com/nitrictech/docs), so we can clone it into our project. +The next step is where we embed our context into the LLM. For this example we will embed the Nitric documentation. It's open-source on [GitHub](https://github.com/nitrictech/docs), so we can clone it into our project. You can use any documentation for this step. ```bash git clone https://github.com/nitrictech/docs.git nitric-docs ``` -We can then create our embedding and store it locally. +We'll create a script which will download the [LLM](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF), the embed model (using a recommended [model](https://huggingface.co/BAAI/bge-large-en-v1.5) from Hugging Face), and convert the documentation into a vector model using the embed model. 
-```python title:build_query_engine.py -from common.model_parameters import llm, embed_model, persist_dir +```python title:model_utilities.py +import os +from urllib.request import urlretrieve + +from common.model_parameters import ModelParameters from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings +from huggingface_hub import snapshot_download +# !collapse(1:17) collapsed +def build_query_engine(): + params = ModelParameters() -# Set global settings for llama index -Settings.llm = llm -Settings.embed_model = embed_model + Settings.llm = params.llm + Settings.embed_model = params.embed_model -# Load data from the documents directory -loader = SimpleDirectoryReader( - # The location of the documents you want to embed - input_dir = "./nitric-docs/", - # Set the extension to what format your documents are in + # load data + loader = SimpleDirectoryReader( + input_dir = "nitric-docs/", required_exts=[".mdx"], - # Search through documents recursively recursive=True -) -docs = loader.load_data() + ) + docs = loader.load_data() + + index = VectorStoreIndex.from_documents(docs, show_progress=True) + + index.storage_context.persist(params.persist_dir) + +# !collapse(1:15) collapsed +def download_embed_model(): + print(f"Downloading embed model to {ModelParameters.embed_cache_folder}") + + dir = snapshot_download("BAAI/bge-large-en-v1.5", + local_dir= ModelParameters.embed_cache_folder, + allow_patterns=[ + "*.json", + "vocab.txt", + "onnx", + "1_Pooling", + "model.safetensors" + ] + ) + + print(f"Downloaded model to {dir}") + +# !collapse(1:16) collapsed +def download_llm(): + print(f"Downloading llm to {ModelParameters.llm_cache_folder}") + + llm_download_location = f"{ModelParameters.llm_cache_folder}/Llama-3.2-1B-Instruct-Q4_K_M.gguf" -# Embed the docs into the Llama model -index = VectorStoreIndex.from_documents(docs, show_progress=True) + if os.path.isfile(llm_download_location): + print("Model already exists.") + return -# Save the query engine index to the local machine -index.storage_context.persist(persist_dir) + os.mkdir(ModelParameters.llm_cache_folder) + + llm_download_url = f"https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf" + llm_dir = urlretrieve(llm_download_url, llm_download_location) + + print(f"Download model to {llm_dir[0]}") + + +download_embed_model() +download_llm() +build_query_engine() ``` -You can then run this using the following command. This should output the embeds into your `persist_dir`. +You can then run the script using the following command. This should output the models and the vector model into the `./models` folder. ```bash -uv run build_query_engine.py +uv run model_utilities.py +``` + +## Create our resources + +Let's create our resources in a common file so that it can be imported to the subscriber and chat modules. We'll create a websocket which will interface with the user for prompts and create a topic to handle the backend query engine. The websocket will trigger the topic on a prompt message, which will trigger the subscriber to handle the prompt. Once the subscriber is finished it will send a response to the socket. It is done this way with the topic so that the websocket doesn't time out after 30 seconds, as most queries will take longer than that to process. 
## Create our resources

Let's create our resources in a common file so they can be imported by the subscriber and chat modules. We'll create a websocket which will interface with the user for prompts, a topic to handle the backend query engine, and a key value store to keep each connection's chat history. The websocket will trigger the topic on a prompt message, which will trigger the subscriber to handle the prompt. Once the subscriber is finished it will send a response back over the socket. The topic is used so that the websocket doesn't time out after 30 seconds, as most queries will take longer than that to process.

```python title:common/resources.py
from nitric.resources import websocket, topic, kv

socket = websocket("socket")
chat_topic = topic("chat")
# Stores each connection's chat context between messages
connections = kv("connections")
```

## Use the resources for querying the model

With our LLM downloaded and the context documentation embedded, we can use our websocket to handle prompts. The main piece of logic here is publishing each prompt to the chat topic.

```python title:services/chat.py
from common.resources import socket, chat_topic, connections

from nitric.context import WebsocketContext
from nitric.application import Nitric

publishable_chat_topic = chat_topic.allow("publish")
connections_store = connections.allow("set", "delete")

@socket.on("connect")
async def on_connect(ctx):
    # handle connections: start each socket with an empty chat context
    await connections_store.set(ctx.req.connection_id, {"context": []})
    print(f"socket connected with {ctx.req.connection_id}")
    return ctx

@socket.on("disconnect")
async def on_disconnect(ctx):
    # handle disconnections: clean up the stored chat context
    await connections_store.delete(ctx.req.connection_id)
    print(f"socket disconnected with {ctx.req.connection_id}")
    return ctx

@socket.on("message")
async def on_message(ctx: WebsocketContext):
    # Publish to the topic with the connection id and the prompt.
    await publishable_chat_topic.publish({
        "connection_id": ctx.req.connection_id,
        "prompt": ctx.req.data.decode("utf-8")
    })

    return ctx

Nitric.run()
```

We'll then create our subscriber which will respond to the publish requests.
```python title:services/subscriber.py
import os

from common.model_parameters import ModelParameters
from common.resources import chat_topic, socket, connections

from nitric.context import MessageContext
from nitric.application import Nitric
from llama_index.core import StorageContext, load_index_from_storage, Settings
from llama_index.core.llms import MessageRole, ChatMessage
from llama_index.core.chat_engine import ContextChatEngine

read_write_connections = connections.allow("get", "set")

@chat_topic.subscribe()
async def query_model(ctx: MessageContext):
    params = ModelParameters()

    Settings.llm = params.llm
    Settings.embed_model = params.embed_model
    Settings.tokenizer = params.tokenizer

    connection_id = ctx.req.data.get("connection_id")
    prompt = ctx.req.data.get("prompt")

    connection_metadata = await read_write_connections.get(connection_id)

    # Get the model from the stored local context
    if os.path.exists(ModelParameters.persist_dir):
        print("Loading model from storage...")
        storage_context = StorageContext.from_defaults(persist_dir=params.persist_dir)

        index = load_index_from_storage(storage_context)
    else:
        print("model does not exist")
        ctx.res.success = False
        return ctx

    # Create a list of chat messages from the chat history
    chat_history = []
    for chat in connection_metadata.get("context"):
        chat_history.append(
            ChatMessage(
                role=chat.get("role"),
                content=chat.get("content")
            )
        )

    # Create the chat engine
    retriever = index.as_retriever(
        similarity_top_k=4,
    )

    chat_engine = ContextChatEngine.from_defaults(
        retriever=retriever,
        chat_history=chat_history,
        context_template=params.prompt_template,
        streaming=False,
    )

    # Query the model
    assistant_response = chat_engine.chat(f"{prompt}")

    print(f"Response: {assistant_response}")

    # Send the response to the socket connection
    await socket.send(
        connection_id,
        assistant_response.response.encode("utf-8")
    )

    # Add the context to the connections store
    await read_write_connections.set(connection_id, {
        "context": [
            *connection_metadata.get("context"),
            {
                "role": MessageRole.USER,
                "content": prompt,
            },
            {
                "role": MessageRole.ASSISTANT,
                "content": assistant_response.response
            }
        ]
    })

    return ctx

Nitric.run()
```

## Test it locally

-Now that we have an API defined, we can test it locally. You can do this using `nitric start` and then make a request to the API either through the [Nitric Dashboard](/get-started/foundations/projects/local-development#local-dashboard) or another HTTP client like cURL.
+Now that our application is complete, we can test it locally. You can do this using `nitric start` and connecting to the websocket through either the [Nitric Dashboard](/get-started/foundations/projects/local-development#local-dashboard) or another websocket client. Once connected, you can send a message with a prompt to the model. Sending a prompt like "What is Nitric?" should produce an output similar to:

```text
Nitric is a cloud-agnostic framework designed to aid developers in building full cloud applications, including infrastructure.
```
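If you'd like to script this check instead of using the dashboard, a small client can do it. A minimal sketch, assuming the `websockets` package is installed (`uv add websockets`) and that the local websocket is exposed on port 4001 (check the actual port in the local dashboard):

```python title:test_socket.py
import asyncio
import websockets

async def main():
    # Connect to the locally running Nitric websocket and send one prompt
    async with websockets.connect("ws://localhost:4001") as ws:
        await ws.send("What is Nitric?")
        # Wait for the subscriber to respond via socket.send
        print(await ws.recv())

asyncio.run(main())
```

The same client works against the deployed app; just swap the URL for the websocket endpoint Nitric outputs after deployment.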
## Get ready for deployment

Now that it's tested locally, we can get our project ready for containerization. The default python dockerfile uses `python3.11-bookworm-slim` as its base container image, which doesn't have the right dependencies to load the Llama model. So, we'll start by creating a new python Dockerfile which uses python3.11-bookworm (the non-slim version) instead. We'll keep the default dockerfile for our `chat` service but use the new Dockerfile for the `subscriber` service.

Update line 2:

```dockerfile title:model.dockerfile
# !diff -
FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim AS builder
# !diff +
FROM ghcr.io/astral-sh/uv:python3.11-bookworm AS builder
```

And line 17:

```dockerfile title:model.dockerfile
# !diff -
FROM python:3.11-slim-bookworm
# !diff +
FROM python:3.11-bookworm
```

We'll also change the `model.dockerfile` to install the extra ml dependencies.

```dockerfile title:model.dockerfile
RUN --mount=type=cache,target=/root/.cache/uv \
    # !diff -
    uv sync --frozen --no-install-project --no-dev --no-python-downloads
    # !diff +
    uv sync --extra ml --frozen --no-install-project --no-dev --no-python-downloads
COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
    # !diff -
    uv sync --frozen --no-dev --no-python-downloads
    # !diff +
    uv sync --extra ml --frozen --no-dev --no-python-downloads
```

We will also add a `HF_HOME` environment variable in the `.env` file to make sure our Hugging Face cache is in a readable/writable directory on the cloud. For Lambda, the `/tmp` directory is the best place to store these kinds of caches that require reading and writing.

```text title:.env
PYTHONPATH=.
# Set the hugging face cache to a readable/writable lambda directory
HF_HOME=/tmp/models
```

To ensure an optimized docker image, update the `python.dockerfile.dockerignore` to include the models folder.

```text title:python.dockerfile.dockerignore
.mypy_cache/
.nitric/
.venv/
nitric-spec.json
nitric.yaml
README.md
models/
```

We can then update the `nitric.yaml` file to point each service to the correct dockerfile.
```yaml title:nitric.yaml
name: llama-rag
services:
  - match: services/chat.py
    runtime: python
    start: uv run watchmedo auto-restart -p *.py --no-restart-on-command-exit -R uv run $SERVICE_PATH
  - match: services/subscriber.py
    runtime: model
    start: uv run watchmedo auto-restart -p *.py --no-restart-on-command-exit -R uv run $SERVICE_PATH

runtimes:
  python:
    dockerfile: ./python.dockerfile
  model:
    dockerfile: ./model.dockerfile
```

## Deploy the project

When you're ready to deploy the project, we can create a new Nitric stack file that will target AWS:

```bash
nitric stack new dev aws
```

Update the stack file `nitric.dev.yaml` with the appropriate AWS region and memory allocation to handle the model. WebSockets are supported in all AWS regions.

```yaml title:nitric.dev.yaml
provider: nitric/aws@1.14.0
region: us-east-1
config:
  # How services will be deployed by default, if you have other services not running models
  # you can add them here too so they don't use the same configuration
  default:
    lambda:
      # Set the memory to 6GB to handle the model, this automatically sets additional CPU allocation
      memory: 6144
-      # Set a timeout of 30 seconds (this is the most API Gateway will wait for a response)
-      timeout: 30
+      # Set a timeout of 900 seconds (the maximum for a lambda)
+      timeout: 900
      # We add more storage to the lambda function, so it can store the model
      ephemeral-storage: 1024
```

We can then deploy using the following command:

```bash
nitric up
```

To test on AWS we'll need to use a websocket client or the AWS portal. You can verify it the same way as locally, by connecting to the websocket and sending a message with a prompt for the model.

Once you're finished querying the model, you can destroy the deployment using `nitric down`.

## Summary

In this project we've successfully augmented an LLM using Retrieval Augmented Generation (RAG) techniques with Llama Index and Nitric. You can modify this project to use any LLM, change the prompt template to be more specific in responses, or change the context for your own personal requirements. We also maintain context between requests using a key value store, giving a chat-like experience with the model.
diff --git a/public/images/guides/llama-rag/featured.png b/public/images/guides/llama-rag/featured.png
new file mode 100644
index 0000000000000000000000000000000000000000..37b0996ccd9a90c88db06d4d8b7b2ae031b4af24
Binary files /dev/null and b/public/images/guides/llama-rag/featured.png differ

$W3+SakwamMSe#bAlD6$T!i1OE2XqOhK&f&m)#emzVrf@@@9 zPMD{dD;iNFN(+4zh#pAG76#4Yy^rum4YNxf7j#wBg=*KDViBz^Q&!J7ANuiHO_>3)lI zgMjij=E+XK)uW(k^g|uljy|LV!HC=6_`y^BWoK~cly^RPcBL)n9mYB{ANoOz@o2T1 zJt5yYqTNvWbR;|0w&-2Y_Hh;O?H_atN7VP1mBY`UKapk7M2vXjtEO!X)e!xOr$peJ z_v^dYY#3g~;VE92JoSZr?aMg03F)DT*_quP6Qj0U6B!^c zyshcntpwRxioH|4G>)@!WK7oBQ*XxeYp)fmS9je2djVA{Es~%yq#6uA@>QkANUln-@I2n`=pKEnSLXCs7}^Z=3)|c_a;oO`F%y>&U1zGAc=^- z9Q8A9c#xHP=`2P69AujJXzy{0xL*5VIZ_AAMds>j>Dmc?RH~L`agmsYs42&jW^7?_Sc=e^$#_IhnV%c;?M-#R;C5gI8p9MD6LC=pkgw*a3^n0H zq;lmcaePpH&pM^r-`H6)WsbH?wS0I;YW}1}$1ME``^=W?W>*ZE=CZ~@^H3qrn`>h2 z7KZM8W-+ub#M>Wbf|_a+Pj`+!BArptL{=$#8}p}O>P&Fwlu1+AXtE2~dc>S6+!4yc z!yZGZ@~sJf{kXEmXEb3aZ}vKZJno1%|1hC)wtC3ks5N)ZsZBK{oj30xnDDvRz0A$`*mQ%HZvgi{>!bDf%90>98V{g{ffgIedzcB>V1s6 zUy#8dEKb=)X3l0e;NCrf@pY619Upyytrn5zgi3 zDHx1W)uLj~Ag)q8Z~?`5JA-+JpkY1>l57>^4W z30{)X|8ib#8mCF!dAySVwAp)8ofhHV`!TFpg{6P_XMerw0Db@dT}k5r8s;_Of?XTv zmhr-V#&vL1ED%*OEv+V=T}}dW_rxjzXWsx~GJYEd8Bd27MH1|+^HLuAl~TZlOH}6x zpL?tw)e^-HZ;_bVh z9cR5Bdt`KC2LXhJe1A0hW%AQj-2KInkueve32qVQx4A(e8w~e(M(LR=b}rCTBH+{!F)(P|li_ zxqcq@wsbMt)N9Y3#`F)mupkM%b;~K>rH^E)PMi>=@;duQ@7bvBTIeG*5bOJX{@W2b zXA$<5C2~or_14Yvga0~~0Vwjz!zwE5nm04eguw|d59aG5onmZ4{sa_;s8BXR3b-9< zsL^&q$Q=v8ecfc~4En#bupaRE6jGt&BFSO1emWTxYH8i5Rtex< zfVW|cqYJrPT7cS`G-eE!Ho7CjQ8^n^U$|IkZft=B#(m7rKWw3|)iGokz1cuM=Z#kx zhf}{lRgpgZjX|RiLm|mbxf0m8m+l?4Ucs8R9yIUJKK^Uu5URgbrzGBXVRyb&ku}sK z*gt%&w)D{7<@C>K?*nqq?1e9;%ajq@6&UM5tLRWepJv35j%>oA!7u2Zp&-?wnmuw` z1#*0IgJAS|Wii_07k$Lv9{upw_)Itm`{Ud$8kDA=m5@oQDDO^80b9m`*()CXEZ(du z=1$>^Vhj~-(6%NGEEGxpwO|3o-RLLe(<#!2rk zc#boGW$(9@^01|^^Amtm=FhTYFQr!8x;9xdumCm8_?x&O;_ zEk$B23yOzEjU z9u7fl)}|&jY-y+PnSRAWB6I_LZt$dSF&toj#il9lyBB2Og`L&unfUe_BCMDFrnm~p z5{aLT3qc%LI$YerEK7(B4#@QJWY9Ca-Z(3)P<(5d-{9b6kLio%0mO>?ks$cMA}7fC z?~CL~eqVPha*0Hx#;}i#(Y!0H(;fng zVB!xw*soy+w5k73D{TK?KE5UQN9kUhsBYH7LKQn0 zF^&oTHaEKKGW%lPv=>v3P4 zmVv}FA^cJOuysgr(!;K+HZO!|7#rq~WTJB)2tekA4d^N73r(f4AF~voEK->FuH1af zE7zlTOS#XxmjI0#{}YEbH6c@mF6dJl0o?z+?3YEHCJ1@~S!gS*YEHvqusYuCY;j=} z6OIU5b>wx<4Xy#&c*j)k1ZC;3^w%#Ct<9j@TdYS*WC1+CAbpaIL@@gq_{Lwx>>jL0 zh6hSQ%G)?>H4VC0g94FPC~3g@6WNi6>?42CN3|NXXOb*A?M?h9+cgiXx-mIDB||DpSAlnGtb0Nkdvde>3@vcPc0wPAGIoSQzC zYDJ$hV@rgMzmAx>xbSR{>-h~8$y?gv%MYxkp(iA)g{a(VpVwiRkJGzveuOGb?52#C zU6Wb};Cn~-^5*~${&xc(C2I3dbi$t+$0RQhs-p}TDeM%>6<{elJ8Fp(?LOTZs*eb5 zdfe7mvKC=++hR5)@dI#f@cW8|;IbWij}a*}RSTq7QqEh@%-rVzPm{Yi*}kFr!i~nf zVnqb877j5g)dc8X{5B->do3$d&b=wkjan(L{Tn4yy<9G!W^_-XR<(!RuCEDk?o?df!Wc11|TDU7}_U9G*Ls=0a?kw*FhY?O=h7(LTzvIzNwQc!uK~3`Ls4Y zB^D|m6i0fF5 z+}^rcwwAP1mz{ICficHdZr`DON2%f;#5b2#(pnjG#<^{VTz{3YU8>|^c@rBBu(RWp z^SDU`?3^4%#5@2NRgRn_DcoyjT7nnw<&(3*DxYsKZ)IIjFr>Ewf3caC`ZJr-O6xW{}0D(yvA%s zu!6E00F|& z=a?o+JDVE&z~_Z94bi#sSrs1ES+DHpu(eQw3jL9|dRUZsHsP+lkv*D1c^*UWE^a5> z<(7lud~JXGzw54I*Nu5sqBH^iK~WkhmkwH0m6W~Z?>soA^8V%bCxf>K9`jSL%Rg$l z&D?O~_J0=VTa-IrDR$+}BpJL*%Q|7P%B(`VF8&@|tn-+OSS+Gt?#*Y@zBZ?|cvnHlZI@Hfqr z$W%&R7Y+`^h;^$LNFD&8a%s#(@9ScQs}gr_^FF5snEo$|SzD}bfF_9hnl$J?xG2+W zU7GRB5HDu5-{+xqIX}uXvnqcNY?R z^duS_dQ<~O>tO?j(X`vaYFgJ;n`8H%5#EpcO>CEF@&iBlrGTzLDv^~jG!W$8_I`XN z3o~YwW#nl`nC?~}{#*wHW5i=OKd*P1G;jbH&G#1ju=aFnu6(rcL_Re0 z{hrpF1OBb;-3+;FG);(tNJPCPGb+irx?ihivt%!Rz*e_OaPCc2oqlISMqRzK zS0Duel@8n5kOt2u7nN#83~epr)q9ZZ^@oM_mW)4RGV>ZM{DatXcv;ME?)h{rXefj3 z@h695h%lE|MY>3BmJV03^YYLFO8;kjl1-c!9V7|SNV12t%%LZhEi{rJ%5{ANvHx{5 zmGeen^H~5U9_LHTYK{PIS2WHJ5=9xpzRm*!oBQE8K0_Z7vqF14Lk+1LHkTniUPQp);yE*t34#o{;B|r6RlxcFeUaR>0_&%)x-CgLM- zV&3n^7V%$u=6KAXhEBKiiN9)zc1YaM^!@~IR3`KM<QOHJ^aBhPW!j+Y# zan&)ow+7D5S9W^`=B}#&i?}|E@xy1Ycoth&rD|EjD)vS$dQm<-``~zj+;xjtczi@c 
zfe5m^r_JF>=9+egb0(u)0s3YHUDLg3PZk~&!-o5J7{W&{calC^Q>bUK*0Owa@owl! zVl-ituqsd?xlgR|wuS3PZIh#}nann3$u8w@3nbBa+AApqlz|18OC*E6>& zlL;xJoV4MG33K0M26WdhFnN0jvN=;7eg7|QcTv;!6ydYB3z4BT(L%mKb0@@PADYRv zOL!oCl{#4H@mPIZDPj6G>S3Z62B`4&?i{#kT}fmOnKlDR4e1KePN6p{Bg4J?noGXl zea4aL2k*weU}Zfz=xADatH-C@2z0M2AS%^mL<|dls=&`YIOEVY__T|i+fXrMF|ckC z*A&E&@d<17lKHoS%GW)&l7Hjp9D0Q7bi#JIvpf>mJ-IoBgJghJ!_Q&{L=bYmL`;~Nfgc2CcF2A z{OD!s<;^d*ESIe>Zx8LJvSQTsY-f|a>L!rpU7+<>l6f3+zaj+Pr}R9k_7pl#@#+FE zCsLr?IQu3EJmc5r>Gap><0NSev^kw#$9w}f%yg2`*=w6^x9*|nP+*U$;5(2CyO8!7gH;I)0?D}6(4{HrCb0oTFug6uqn;Gy37@z$! z?oQ}So@WxHytn0R;WHfB`>ukmEAC(dxhTxdf1KIRO0g!*J#uGkkq~8W$!E%Ru55HM zHXGe9)C6P(Oc!EB!u_|-Np9`2{rrf+x`(oUe^EzcJlwunK4dB2RGBj7QM;zc?`cUh(rD*))5-hp*z3oS zxPF_5`I`+Sa#WOxlj<9Hnba&Ol|d2C^m!`JuveXcy_0J+Hsk-;BozS^1Jf#eWjm#k z)zgW~uqn;K>&kOwvx?ov{;1P(ZTjmdF>Oj*1t98~htXir#D>a#KmoV@O-k~U&ma;% zzD64LxslJq&?+8=jcbEGij<-%9O-V&e#cA2yw-trOI*NA|_D6_ovU-m@w-kw_Ry%2WqX z1-p+VtxQl>$BOSjYMlBqS{&nbT?b5}uZEDVAOC2s?8(lZZX+eUxgs*Qy&ukGW!t73 zD}^8{1rO9H?bkQ&W>a0a|Mz7Z{uLoe*OLw6=f19IP2-c|RANA=JWSurxu%6})>=J6 zY*+GRCymfQXM8p!8xoiM9(~B!ouGsr!?qR|hgp4t4=#v-)=XRoUCF}k)z(Z{(rF)8aI zLAKjG?@*}kk1pMvO6O8)B~5mRhmqSNr75KM+}Zf!sq|$?&xl@zqjdwnYNf}^1R|Dk z!dQmJCbsiQ_6nX(`4Xg` zTzbtvZl-1OqK&m!g!edxAz1PaFzWOve(r^IHwRzJPao;w*&S9V7q1_STF5vD-}8|K znim5XJ@YB<6)l#DC&x3gM@k<0`k-3_gGzF@=bYM0Y8@Tc>6TZUsPIeNoX?D8Z$XID z*Ns?8(fqS67x$y5Nnp#h0=-@3ruAeoa3mBr9L=qtt#Z7<%5*|1<-B__%-i8)<*|Fn zpTOen-Abfj2J?f}PMQhFwv~d1v2L{E`+BJbc-GXd&nBP5zd55M#F+tQMgr8^87-6J zO(5@^cRevDb=#*j)rFbCtbOrVE3E$u8Cu`rOxC45*P}Ir`-nM6OYn1ZR_(+zGH_?e z%Sjt4EscGvnVa^E4*E0mBJ*RB9893EKq6FBXd?B;rh_qxOb6yFfjHe)N}g|_{#z4WO8`hx>)^-eHJ z!0(TOMmmSn1+@1YnzDECG-Jdg&DixC`+LEDjS_%3A8Dq-ge>A8GnbyPHse;gLTpcB ztiAKEx0<^*9Vk^Yw-8#hWn5%i-89WpnRRYG#Qm1`zRVZmYPL@%V32p&wk&+|{QDAa zH6R_!_~_AiJi)M40yP^8H87@YME1@}MO6Q!T48AlX(<-Y%yrQp-R%uX@ukM8JvP^a+=y1ev3JJP41!V2s|JGm#RZB95 zNy+^AU@}7m{&C#I0sbT^p zhTW+M?*xuMZ!$Zf8PbQ>hG%p+RnDk9^uyzRSXWBDy#xTWD^r^$Rg7ms!lzG4B>_-_ zR3@Fp@l8dEAV@|5oXVtpbi2{rN@o*fNAYBs6zsD)^N1UEuSl_!EojxCGiHg%?a5p> zwUNow9ufaMn&tpj2!C2KR4J=mSJc<;19q=@c3ne)IrW6xH*zz=`xrB+VCA2O!3Ebb246iJxz1(w90?pdgpr=)5Hv1$e+t_} z9k}>=z@?Fx>8DV$*jd9vS^XD^*_q@o>9$XE8t+HqTj%N{z+)C~{AMsJf!@?+KO&B5 zwqefXD?}*7Wp!jSlonLZSPBaZhZ}oZ4{aX{TUc)*U5E`C*vL4;r}#gp-uxgl;y#!9 z3Qta2%}P{m-VBgdle=+7CRKc^&UH4#Yi$fS;_QvAnLG-U&j0TXh`G0Qy(o0S?lF7&oXdV(IQX${XQ!bVr@ZMB z*ad&?ctTDu3mbW>OdU`N$Kao1mFoUjBWy<^gadWXLzXixUmFKh1m&#-7 za^H$+Iy<-czr*p9GZt*qWptqio&%*PB&f0J`W&s*e^q=jG1y6*tc`ZpLAvHsW=XO4qt{>Zs7I4{8vJ>zkX^4^Rx^$%?H8i($PpgIA4wbK*LFxbSTR z4{y#ijFlU4TQH({fEq>~@95j8_*LX9EP4Q+5yEcgs@x}}fory~jzooN;=;@MGzRnq zcTrl7i@0TAOW;B72?bU<;RJMu`L2;9;vdE&(`sNfmwPX#{F!ER1*QI{MZS7W(&l@x z$BD~bs)|hT>E4{{DZtZ^+9qsAtOs$so<*A#h;`b%-EOb!+0vqbo^FCF1UNr!|8|o& zy|U@#$>+kHJ|lqUTJ7WpqCo}H+>Pr=&F*{dlc8hEGT45yM_Rb>GfQSL#(ZY2 za2iKY_+NHCx!AW=e*xfvF8Yk-{W)OBo%7tiACgo0w<5X4tbW(WdeudCf>$*eU|5K8 zLh-AFZLKVBEk-3o%Ecz>egD%c)0gME;hd&-{dxsRJ{!hrS{nFB-XnZBAvlb9@(ayu z#T}lxZqGf}9~=HS0Tq5eo75YeO8axlgTU%fIFu=P5r+d@nq*Hd23qr7I{@z>Tj zPq9x+!Z(Gv^`sxy-;f=8v>+XFx);v?ki5bIu(_p?w9Q@_ExTb@}c1!yg`H zi~B^8GlsI?vo&>K7Z^JjPHl$@bcF_);bE^6zsbl6CkSpo zcU%JpZ!%2PZQGYNk-!kjn_h3TM+Q{ZjuV+*GK-0y4dR0e+h^XsV~1uGM$MM)YMSeU z|BjlEG@9wW{1~!Z?1&(pNj?f~1g2?Ry4iqWzdax=h**#FXgNwNenYT{0|P2(Uf=AN z^jeY6jL*40ztE}}A~D9Z-v4cb(TNh4_1}jwF_#t;sG^swc#`Dg0zt5=kEKgLo{=J~Vd5Xav`C~E+9cb=F@d+996p*U_>NIc`*>0_E;jT0P; zBW}LO%K7*RVcZjJh7|=Gx)8or!Kbr?wi!7c_tpUpUbR%hpH_6xS}*xHX)Q+B?TR%? 
zUvR&{5f*~)p|nu8KTGxsIXoD+XfNkAF?sC{rNwp!=5?Ocva`M6^7t`m-$=9FT+W$l z+Z-W&8?f+l-xN?91_U6A3-;?KIZ!i%YbxCXb zaY+)p>8%2;<7%@GKwaM^s0)3Bg%o7#qRm_A{-JiLkMgTw96KjOkYMRDwchANlrFY2 zk&0k*2&eABi4`@G^y@GD_vNn$8J@dqgHEc=-^#=!4gMyN5>9th9$X5wSlDCgimjZX zShGb1=rTYCQ>|&NdK#%{nG2?``pSFibuHb4weQ?4W>OnC!+*B)W#?%!eqzG4{4l}OdpUJ>fQ+TJemWR;-#aLkE1wdt zdguJ%l~H$;aE%Wpc(Do{X%_msou|?|n?3M9untFriIhiRw<4zX)o49I33&}2Y8^At zrWk`)??C1?hy;x#WpqwOa>K~_-P!Y4rzRbx(pRmGj1le_Wa9^p2j6Ia`I2<2WZM`K zEH3O%7^oPbMUKmWHrXE+$o#3}kIK6BSv4qO<-8wJEShBRGeiD)a| zDJY%{7krf@SSK?_A##RfL+c80*jcy#%wMf|q>*yqf zSDhmHw(9!s+$au75}Moe039AfNlnB_nx;=>X>ii*KMv^@q6AcL?}qMK#?sf@3-EX_j9{Yc3P zMt8Ld2H!-?jrSE!1ADdNNr-Lbg>dW8 z2Aa0-FZK=h$Uq*ott})khd^XG1LFUWMOl*$B<$cZB884uX^memCK{Icxx zs7{}m2IV_E4;cRdO8|m>GR`$4LuV4T{%h1`QRP1Ze>`OKJgNGzE?tDb{4*e4WY5o{Z|7&A~dgCSXqUt?$7798;VRYqp;po zr+GP^c#kcVp9RKX5ubxBO*TrvhiI6h{D{z}5v997qC=B2G zOxXRXqd?O~KQ~;^?l?mRH$5eu`od8!lrWU|gdg8INA~ePzna45G%XEYh(@eW%+gXU zcpvTL(B@F3^t2wCK0XpyR-iNgXCSX|S$%k0=8WJi#0gAbOKMQSV1 z%~ZM9_-d0dshGz`#t4?4YWotq>y`$~EuUv*%8|#{Qr$9NfPRIWr*%49Y4V1?;MC^? zWRs`NCP0Zb{*H@diFS|-Q0xARn9`iVSz79iw^pA*E?U;TL&*yC%jos%PIdOoU4zI9 zpJJa$2Fk8Q5-_+SH!2g$t!{e5+}vf_9VBOGcV}e3zN@CHUq1_f5*dafLG;7Ur6Eb6 z*{~@HYT&T3^oG-$)RwiEB;E$xehKKvdmQOZS^q<;wdX+Ru($t$dJoFjbpernHVzIB zk2wL*+{od8UfC9IW6aR(l@E*2+}ExlUU)tk!O(EypkQXQ*IWn^Hr_B-w=Cgt?t-mr z!@kYP^Ab*HUCGdOvM-+huqx74ls2BUF+^4lA?uv%-0-C|tl!igv3e4W$}q4)z7&=l zJCGSI`c)Yd`83V_3dT9N{aVilV^qbsjz?$4(}UQ(@R`%O7Sb9#+{V>=TSCol?6o^F zdtW=ZC*$GEzlErqb3Q9W^U!$BS+L+<`(8MQm2Gg6rrY)sc52@+cq81MH-RG*p^`!hMA4cKEL-mzsKYE=l74_ zU*E^$y#M(8@p<1~@8{?Jetq8W*ZXy8x)viD`Y9+wA0;XF%aoqY(Ow2w*uN0!yFp=Q ziyI;$mn3qG)l}c$aqL`AJ+rUWWP7eQ`5WDS^aZ5ra1OjG-8v<`Kk$Km8&S z5!*+!V~`0%kZmrwe&rToIrL4BLC`5pqyYIo7$if23jQVkkGulF%I z8ye!(>Ab8d=lS=)3uppGy|7kD^|T22*6|c!MLpV5^+kec?DoohOJ&=i6Y;^JO}kb5 zPT2Y|)fUcrToq{zC)V{94A<7IW0(9`J(;D$4HY+;pR7!})3Ox@g0(2c4!-T%m)CTk zku7?cK-UHbR?B%FPEjMKp`7fK#L#U*oUg1+@><@Wam8U8p60G$n3%ae5$WvXrf_&Z zowSzWV1Sw(%HyvR-i|!J$iVPZGxv(BNwuiSW%=s24)9zw7NVnT?rA)g^cNuY1(Htp zUGIZWx1M;79Z)rHDt?2$-dxp-71mG2wmpb}W=SFYeb8YU%+A;MEA@0bC2y4%jcZH-=NBBb7O>B^hfUz z$#H5e!YoBY;6E@|Y6M?zc!8V-L*EG6eY4qd&$FTGEeBs%4_T_8`%Q}HvG7yTl!m$e zG<1TdGoa8jWh3hFU`+Aoi zSKxh2*y`e#A?C6giymMnikiErJ_!YSPEv(o%ouiMl?%p#m9(9-?_zJ=b^&7PsANxG z*S*hF6aP?eUO+xm{2l*dRzOV9#^W2@=y2Cqm=1 zR~&tZ4DNB&yy)*k$sPM0x2dYpA!z&laRNer59+}azm1xQri`1V;@h~OzkzPqk#>RL zqSZ`am1^0^U8EB*_r=F;U#!}67M|6pHe&Cpz{-H8j_-zcFKDlnEqI5{ZKDL zCQk(GZZMtQcRY8A-o*#byNhCXeBxn`CK_H{h zCcN=O<%`(`q4>r9xHM!|PkZ*lz1KXite))iFG%Hno8t2S7%3R=XOKGH4h|dl9e#6s zi}Y^C(Ni2QZhx%p^J7|;Ezz(jGnkiVa>Tla1yF9l1=BWOZFiR!dbxZ_8Hm5*`{3Ta zd$EeHXU<_?xC`-!SX?R@ekCGv%8JsxxkM~|sz?Y_6=c!;7Hz3m+wP9&C)@ zSs}Eca`x4po|AO#wwSs5 zxS0>-cW(Ec&3Uy%h^I@s>)F!bkQk1{5y_6`$9wvna=hHw^YS;lv09y}b4#5Y{3A05 zA`9%-i;Hh-JFA=T7`MhJ2o=stzs>imQxe{>=z?SK3}X_Q!Nwam^w%6VPv4wxa{Dc5 zAm6-uY|DV9@f?q=iYPss4Fp`1Ce=7M^OTwP!@{{mE>%BuQ=gtRnp&N*?ib>@pOWCADWdPU2_sjyX~% zLd<4Hippolhlhu&>szY(XCxMVN$(^>`i6o_QJ>!WY`>kp4B;Ey21UK`*uxOVSFFuH z0Y^B>x9gaG9?s8fp*^!c7PxGtPMt1GnRZ~YcT^3ND$+to-oZDXjvAa!(c_pN8ofyi z^}GA?%j)tEcZeB?i{IVZ4Ue-cZW@FwaHcKIz8CmfVLk1&6>&ow3-gel|%05fH!nFP>{4dSBCPKcp~vYlYNwozy(eODDUPDFCT1^Q^{ zW>8t^1j$kuSYygh4;cP-+<DEYb7tm=QPRfN zD~g{*fhTR^7kVD)o`f>%$vxu>80O(OgToW$D=$_Rc-bAjnCWmD)1PsK=~h0T&sB6( zMFGPn8(A&wZ@4Ud*m=({E6Hb;9oQZ6kfkB_pw>R>Sf7WrBbqyGYc(qv%<~Z)*|RvG zQEs=S&175Ly5_S{IKS}E+i*Q&VDJ4D9BIltTqcG2a)*z-?c+0Wz39AWL}?KEAMD&#jodRx9-W6PEVs zKs%k{Mc4Z+#~f)<=RIJ)nc(;Vb3LH~?f-MaSn(O~gX5{d8pf;1@F<2=eeYgumlE}u3PLCN}3oTXnQ+W8*?=(+ZH_JWy=d9ud>oV1FHxk-%Itrs0bo<;e z#QtsmO5c>BQHuouc@c?BWPOTA)E3cH$=41|9>P-swlJA{I=7f!#`9czwq<#EZ!UQE 
z?edQIcc+?!(r+P@{sL9_Z>QDog;<_>w_QMRFF_MU#3n zOcr?-|0s~UeO$6|utYchU{4p5q3c%l9lrg1zUq{@RI-hsl`YQC*SL=B_o?<(D=#W- zui4=}$AQ?I<2C#onavbepR@2Lt!21kG9e#%x$#L{XavRSApY(i2_d`ym7#EL&aZf1 z%$g)x?u#C{+8-Yi4di9lRe`Q$vw2VM(fz%7nCqeeT)T>CIo2N zKzSFY%`x_L*q`j6Tb@Pp`p;^;ZWzUFvM_62T5so^sM2Sa#GZ@PrB=74SPL)yn?t9axl4_MxiJ2 zFv2=ff3`kS-rOhI_IXcs(3P=keok^>7K|nkIk7RF7kc=T&uR)2jAFncd~?!VI{?pETv4VnlaQJp%cf|v8a;OVl=;l57^r_Ffc z+qa!qK3-8p5v?8gxXsT5XSt`f7wB8wP2?RtI;F>#8nzN z2*|ht$pS*$-SQWf?hPTb5Pe}^!vqoB%pK#+f2--%(?EPNOLC84t#~UM+>h=2eAN1m?#+odqoYQLP*V<*`6pUJjsG~WqW>pxaN_D`a4A#kY{03Pu+ZeS z>tRZV1R!AUaK7X$CZ@I2Xh|Ov_5L}E9GPo=ugrjjpZ&%d0xEA(XU^;~!)n_Wm*RLL zf|x5f{!)g@{c4NPwnv}P!ECyBg_uja(1ocgnF}y-oV2FK$Dpw`%ss56(B@4P_kx_uqw)L3R0tnvN>f!!T@c`)Cc3%I?lNg%mlKcjWZR>I}_3=x}h**Q?zUCmFR zi`+EwEbZmxNLm=UqgP+Xq+Y6DEW(MbbuCR>D^-*}8NHTm1%{?u?XrxbUG`ENFHAzI zgx4M~3az6?P94Vfhcg|r_oD9d=fg(DwD!Vhql1Hk8~>qf3(Q1c<7d=!BY5LKnthxgxcCuDfldTvZ0xk`I%B#Rp6@}*Qi%Y&Uaz*o#X=!Pu zD*^pj?U&17H5vJB!$|RPdW3eZjS5W&J@z^;5{Wc{Anu)5R!dYD zf24lj3k)M0dW$$^RjB<)WXzj2(YhEvzwg3}S>K(g@dR$duoy1S05xE4;7`Nd!>S4A z-L_&tGnLz2>&o#8w5sT*wjf{*S<&;P+1$w#Is~H3W zbdw(^?5r;tv}9haV&4k*7OH{e80|Gp4Ds;OQ#?_FAXjC13~;f+{Rs{X{W`bb*x1FL zTC+Iy+(o7Asv3m1$1BRm!gIL+v(ck)f5jhKTPZ+|A>oHha*XPOx|GsCLqjJoX{^Jy zNEYc%%Q{oMjxJEy);2aYPm?S~#?e#={k0w{^%^b|>t z0MEKiv$%F?vhN!ZpMxe^e&2}I=m9pHW$aHMZ9Tla?&r21$<_+%gh?c`lZlmx1& zubULQvrR6UnqBYYqi^57C13`8PbypTG6OB`O;?*L0{$ro;<@toV}m%k$2wb0%oImQ zS%SF^EZQ8fCS*W9dPfbSd>Gp?GOYzfx8eWvG@}Z85;P~_;Y3%{mn-^{Aj(=LBY-h2 zeg_`b?yCuDE$s{-bUk93Iw=Z zr$MHmzh8e$TJFMuWiV`NeqkZKRn))$a)h7*^!#hpuXf1tH1sXxMn4GY2Se`j^~%Fj zSc^6q1O3FsuZmw@x)JEL7dlCN<&kYTkqa|a0leh^Z3K9Od`@tqk=wHr#?{J)Nqv5R zmEYo+2v6$-g~_w;pT^_pM7Raf`51Z_TT zM5Z7Rz=AleeBZYVG+U-=d@0_$-9f(ZvaO-y4`I QHI`s@wkMB}5BXmHANFD2WdHyG literal 0 HcmV?d00001 From 62741761231b430fc4bd475ee91e0835ba33867b Mon Sep 17 00:00:00 2001 From: Ryan Cartwright Date: Tue, 31 Dec 2024 10:41:19 +1100 Subject: [PATCH 3/9] spellcheck --- docs/guides/python/llama-rag.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx index 6a5faf409..732ff7105 100644 --- a/docs/guides/python/llama-rag.mdx +++ b/docs/guides/python/llama-rag.mdx @@ -459,7 +459,7 @@ RUN --mount=type=cache,target=/root/.cache/uv \ uv sync --extra ml --frozen --no-dev --no-python-downloads ``` -We will also add a `HF_HOME` environment variable in the `.env` file to make sure our hugging face cache is in a readable/writable directory on the cloud. For Lambda, the /tmp dfirectory is the best place to store these types of caches that require reading and writing. +We will also add a `HF_HOME` environment variable in the `.env` file to make sure our hugging face cache is in a readable/writable directory on the cloud. For Lambda, the `/tmp` directory is the best place to store these types of caches that require reading and writing. ```text title:.env PYTHONPATH=. 
From e874a685e692686877ce86a0c798b7a909831335 Mon Sep 17 00:00:00 2001
From: Ryan Cartwright <39504851+HomelessDinosaur@users.noreply.github.com>
Date: Tue, 7 Jan 2025 11:00:06 +1100
Subject: [PATCH 4/9] Apply suggestions from code review

Co-authored-by: Rak Siva
---
 docs/guides/python/llama-rag.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index 732ff7105..6e2f6b045 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -76,7 +76,7 @@ We'll organize our project structure like so:
 
 ## Setting up our LLM
 
-We'll define a `ModelParameters` class which will have parameters used throughout our application. By putting it in a class, it means it will lazily load the LLM, [embed model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/), and tokenizer so that it doesn't slow down other modules that don't require everything to be initialized. At this point we can also create a prompt template for prompts with our query engine. It will just sanitize some of the hallucinations so that if the model does not know an answer it won't pretend like it does. We'll also define two functions that will convert a prompt or message into the required Llama 3.2 format.
+We'll define a `ModelParameters` class which will have parameters used throughout our application. By putting it in a class, it means it will lazily load the LLM, [embed model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/), and tokenizer so that it doesn't slow down other modules that don't require everything to be initialized. At this point we can also create a prompt template for prompts with our query engine. It will sanitize some of the hallucinations so that if the model does not know an answer it won't pretend like it does. We'll also define two functions that will convert a prompt or message into the required Llama 3.2 format.
 
 ```python title:common/model_parameters.py
 import os
@@ -525,7 +525,7 @@ config:
 # We add more storage to the lambda function, so it can store the model
 ephemeral-storage: 1024
 ```
-
+When you set ephemeral storage above the default 512 MB, there may be additional charges based on the amount of storage and how long your function runs.
 We can then deploy using the following command:
 
 ```bash

From bc9232e4b94aa9f9802edbb8eec987f12bb3e8a0 Mon Sep 17 00:00:00 2001
From: Ryan Cartwright
Date: Tue, 7 Jan 2025 13:44:10 +1100
Subject: [PATCH 5/9] formatting

---
 docs/guides/python/llama-rag.mdx | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index 6e2f6b045..7635c2aca 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -525,7 +525,13 @@ config:
 # We add more storage to the lambda function, so it can store the model
 ephemeral-storage: 1024
 ```
-When you set ephemeral storage above the default 512 MB, there may be additional charges based on the amount of storage and how long your function runs.
+
+<Note>
+  When you set ephemeral storage above the default 512 MB, there may be
+  additional charges based on the amount of storage and how long your function
+  runs.
+</Note>
+
 We can then deploy using the following command:
 
 ```bash

From f188263cc95150f10c8704d068e20f0cf7d3001f Mon Sep 17 00:00:00 2001
From: David Moore <4121492+davemooreuws@users.noreply.github.com>
Date: Tue, 7 Jan 2025 15:11:45 +1100
Subject: [PATCH 6/9] update date

---
 docs/guides/python/llama-rag.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index 7635c2aca..610e54df1 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -8,8 +8,8 @@ languages:
 featured:
   image: /docs/images/guides/llama-rag/featured.png
   image_alt: 'Llama RAG featured image'
-published_at: 2024-11-21
-updated_at: 2024-11-21
+published_at: 2025-01-08
+updated_at: 2025-01-08
 ---
 
 # Making LLMs smarter with Extensible Knowledge Access

From 4bbb14a336d374b2c0d567e548d3428c675c77e3 Mon Sep 17 00:00:00 2001
From: David Moore <4121492+davemooreuws@users.noreply.github.com>
Date: Mon, 13 Jan 2025 14:09:26 +1100
Subject: [PATCH 7/9] Apply suggestions from code review

---
 docs/guides/python/llama-rag.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index 610e54df1..894447385 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -8,8 +8,8 @@ languages:
 featured:
   image: /docs/images/guides/llama-rag/featured.png
   image_alt: 'Llama RAG featured image'
-published_at: 2025-01-08
-updated_at: 2025-01-08
+published_at: 2025-01-13
+updated_at: 2025-01-13
 ---
 
 # Making LLMs smarter with Extensible Knowledge Access

From d6b8670a5dad575903a0bc2a7b3551b76ebb7064 Mon Sep 17 00:00:00 2001
From: David Moore
Date: Mon, 13 Jan 2025 16:48:22 +1100
Subject: [PATCH 8/9] add missing connections logic

---
 docs/guides/python/llama-rag.mdx | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index 894447385..41e3f81e6 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -276,10 +276,11 @@ uv run model_utilities.py
 Let's create our resources in a common file so that it can be imported to the subscriber and chat modules. We'll create a websocket which will interface with the user for prompts and create a topic to handle the backend query engine. The websocket will trigger the topic on a prompt message, which will trigger the subscriber to handle the prompt. Once the subscriber is finished it will send a response to the socket. It is done this way with the topic so that the websocket doesn't time out after 30 seconds, as most queries will take longer than that to process.
 
 ```python title:common/resources.py
-from nitric.resources import websocket, topic
+from nitric.resources import websocket, topic, kv
 
 socket = websocket("socket")
 chat_topic = topic("chat")
+connections = kv("connections")
 ```
 
 ## Use the resources for querying the model
@@ -287,22 +288,29 @@
 With our LLM downloaded and given the context documentation for querying, we can use our websocket to handle prompts.
 The main piece of logic here is publishing to the chat topic.
 
 ```python title:services/chat.py
-from common.resources import socket, chat_topic
+from common.resources import socket, chat_topic, connections
 
 from nitric.context import WebsocketContext
 from nitric.application import Nitric
 
 publishable_chat_topic = chat_topic.allow("publish")
+write_delete_connections = connections.allow("set", "delete")
 
 
 @socket.on("connect")
 async def on_connect(ctx):
     # handle connections
+    await write_delete_connections.set(ctx.req.connection_id, {
+        "context": []
+    })
+
     print(f"socket connected with {ctx.req.connection_id}")
 
     return ctx
 
 
 @socket.on("disconnect")
 async def on_disconnect(ctx):
     # handle disconnections
+    await write_delete_connections.delete(ctx.req.connection_id)
+
     print(f"socket disconnected with {ctx.req.connection_id}")
 
     return ctx

From 752d96451693a011938a4e55b495a4e6d287dc22 Mon Sep 17 00:00:00 2001
From: Ryan Cartwright
Date: Tue, 14 Jan 2025 09:30:28 +1100
Subject: [PATCH 9/9] add note to copy original dockerfiles contents

---
 docs/guides/python/llama-rag.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/guides/python/llama-rag.mdx b/docs/guides/python/llama-rag.mdx
index 41e3f81e6..d4b4b259e 100644
--- a/docs/guides/python/llama-rag.mdx
+++ b/docs/guides/python/llama-rag.mdx
@@ -431,7 +431,7 @@ Nitric is a cloud-agnostic framework designed to aid developers in building full
 
 ## Get ready for deployment
 
-Now that its tested locally, we can get our project ready for containerization. The default python dockerfile uses `python3.11-bookworm-slim` as its basic container image, which doesn't have the right dependencies to load the Llama model. So, we'll start by creating a new python Dockerfile which uses python3.11-bookworm (the non-slim version) instead. We'll keep the default dockerfile for our `chat` service but use the new Dockerfile for the `subscriber` service.
+Now that it's tested locally, we can get our project ready for containerization. The default python dockerfile uses `python3.11-bookworm-slim` as its basic container image, which doesn't have the right dependencies to load the Llama model. So, we'll start by creating a new python Dockerfile which uses `python3.11-bookworm` (the non-slim version) instead. We'll keep the default dockerfile for our `chat` service but use the new Dockerfile for the `subscriber` service. Let's copy the contents of the `python.dockerfile` into `model.dockerfile` and make the following changes:
 
 Update line 2: