When processing an LLM's response during a task, it should be possible for the task response to include a list of follow-up queries, each specifying which agent should execute it. This would, for example, make it possible to run multiple personas/roles as agents and have them process requests.
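A minimal sketch of what such a task response might look like, assuming a simple string-based query and agents keyed by name (all class and function names here, such as `NextQuery`, `TaskResponse`, and `dispatch`, are hypothetical, not part of any existing API):

```python
from dataclasses import dataclass, field

@dataclass
class NextQuery:
    query: str   # the follow-up query to run
    agent: str   # name of the agent/persona that should execute it

@dataclass
class TaskResponse:
    content: str
    next_queries: list[NextQuery] = field(default_factory=list)

def dispatch(response: TaskResponse, agents: dict) -> dict:
    """Route each follow-up query to the agent named in it (toy dispatcher)."""
    results = {}
    for nq in response.next_queries:
        handler = agents[nq.agent]
        results[nq.agent] = handler(nq.query)
    return results

# Example with two persona agents; the personas are illustrative.
agents = {
    "critic": lambda q: f"critic answers: {q}",
    "planner": lambda q: f"planner answers: {q}",
}
resp = TaskResponse(
    content="Initial answer",
    next_queries=[
        NextQuery("Review the draft", "critic"),
        NextQuery("Outline next steps", "planner"),
    ],
)
print(dispatch(resp, agents))
```

In this shape, the LLM's task output carries both its own content and the routing information, so an orchestrator loop can fan follow-up queries out to different persona agents without extra parsing.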