Is your feature request related to a problem? Please describe.
Some AI providers expose an API parameter that controls how much "reasoning"/"thinking" the model performs on each request. We should expose this setting to the user.
Describe the solution you'd like
A CLI option and a config field that set the reasoning/thinking level, for both review and guide.
Take care of models that do not support this knob:
- Should we fail fast, or just raise a warning and proceed without it?
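One possible shape for this, as a minimal sketch: a `--reasoning-effort` flag plus a resolver that warns (rather than failing fast) when the chosen model lacks the knob. Everything here is hypothetical — the flag name, the effort levels, the model names, and the `REASONING_CAPABLE_MODELS` set are illustrative assumptions, not the project's actual API.

```python
import argparse
import warnings
from typing import Optional

# Hypothetical: models known to accept a reasoning-effort parameter.
REASONING_CAPABLE_MODELS = {"o3-mini", "o4-mini"}


def build_parser() -> argparse.ArgumentParser:
    # Same flag would be wired into both the review and guide subcommands;
    # a config-file field could feed the same default.
    parser = argparse.ArgumentParser(prog="tool")
    parser.add_argument("--model", default="gpt-4o")
    parser.add_argument(
        "--reasoning-effort",
        choices=["low", "medium", "high"],
        default=None,
        help="How much reasoning the model should do per request.",
    )
    return parser


def resolve_reasoning(model: str, effort: Optional[str]) -> Optional[str]:
    """Return the effort to send, or None if the model can't use it.

    Warns instead of failing fast, so a shared config still works
    across models with and without the knob.
    """
    if effort is None:
        return None
    if model not in REASONING_CAPABLE_MODELS:
        warnings.warn(
            f"model {model!r} does not support reasoning effort; ignoring"
        )
        return None
    return effort
```

The warn-and-ignore behavior keeps one config usable across heterogeneous models; fail-fast would instead raise here, which is stricter but forces per-model configs.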