Question
I often find that the best way to understand an LLM's responses is to look at the full request that was sent to the model. Examining message parts doesn't work too well for me. So if I want to see exactly what is sent to the model, is there a way to do it?
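For context, this is roughly the kind of visibility I'm after. It's just a minimal sketch, assuming the underlying SDK is built on httpx and accepts a custom client; the hook and client names here are only illustrative:

```python
import json
import httpx

def log_request(request: httpx.Request) -> None:
    # Print the outgoing HTTP request so the exact payload sent to the model is visible.
    print(f"{request.method} {request.url}")
    try:
        print(json.dumps(json.loads(request.content), indent=2))
    except (json.JSONDecodeError, UnicodeDecodeError):
        print(request.content)

# Pass this instrumented client into whatever model/agent constructor accepts one.
client = httpx.Client(event_hooks={"request": [log_request]})
```

If there is a built-in way to get this (a debug flag, logging setting, or similar), that would obviously be preferable.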
Thanks in advance!