Feature request
Currently, we are not checking the length of the context we send to the LLM. Before each request goes out, we should measure the context against the model's token budget and trim or otherwise clean it up, so the LLM receives only what it needs. This keeps requests efficient and leaves headroom for the response. A rough sketch of what the check could look like is below.
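A minimal sketch, assuming a chat-style message list and a crude character-based token estimate. All names here (`Message`, `MAX_CONTEXT_TOKENS`, `estimate_tokens`, `trim_context`) are illustrative placeholders, not existing project code, and the budget numbers would need to be tuned per model:

```python
# Sketch: trim the oldest conversation messages until the estimated token
# count fits a budget, keeping any system prompt and the most recent turn.
from dataclasses import dataclass

MAX_CONTEXT_TOKENS = 8_000       # assumed budget; depends on the target model
RESPONSE_RESERVE_TOKENS = 1_000  # headroom left for the model's reply


@dataclass
class Message:
    role: str      # "system", "user", or "assistant"
    content: str


def estimate_tokens(text: str) -> int:
    """Rough heuristic (~4 chars per token); swap in a real tokenizer if one is available."""
    return max(1, len(text) // 4)


def trim_context(messages: list[Message]) -> list[Message]:
    """Drop the oldest non-system messages until the context fits the budget."""
    budget = MAX_CONTEXT_TOKENS - RESPONSE_RESERVE_TOKENS
    system = [m for m in messages if m.role == "system"]
    history = [m for m in messages if m.role != "system"]

    def total(msgs: list[Message]) -> int:
        return sum(estimate_tokens(m.content) for m in msgs)

    # Remove from the front (oldest first) until we fit, always keeping the latest turn.
    while history[:-1] and total(system + history) > budget:
        history.pop(0)
    return system + history
```

This only covers the length check and basic trimming; further optimization of the context contents can build on the same hook before the request is sent.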