Hello,
I have a CrewAI project where I defined several tools to interact with the Microsoft Graph APIs. My goal is to use a single general-purpose agent and task that understands which tool or tools to call to execute simple or complex user queries (complex ones can involve multiple tool calls, possibly mixed with calls to the underlying LLM without using a tool).
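To give an idea of the setup, here's a minimal sketch; the tool name, prompts, and the stubbed Graph call are just illustrative, not my actual code:

```python
# Minimal sketch of the setup (names and the stubbed Graph call are illustrative).
from crewai import Agent, Crew, Task
from crewai.tools import BaseTool


class ListGraphUsersTool(BaseTool):
    name: str = "list_graph_users"
    description: str = "Lists users from Microsoft Graph and returns the raw JSON."

    def _run(self) -> str:
        # In the real tool this calls GET /v1.0/users via the Graph SDK;
        # with many users the returned JSON gets very large.
        return '{"value": []}'


graph_agent = Agent(
    role="Microsoft Graph assistant",
    goal="Answer user queries by picking and calling the right Graph tools",
    backstory="General-purpose agent that decides which tool(s) to use.",
    tools=[ListGraphUsersTool()],  # plus the other Graph tools
)

graph_task = Task(
    description="Answer the user's query: {query}",
    expected_output="A concise answer based on the tool results.",
    agent=graph_agent,
)

crew = Crew(agents=[graph_agent], tasks=[graph_task])
# result = crew.kickoff(inputs={"query": "How many users are in the tenant?"})
```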
The problem I'm facing right now is that when a tool's output is too big, the agent takes a long time to reply and sometimes gets stuck on "thinking" indefinitely.
I've tried different solutions, but the ideal one would be to get rid of this issue entirely so the agent can use the output of each tool correctly.
What I've tried so far:
- Writing the tool output to a temp file and reading it manually inside the tools (see the sketch after this list): it works, but then I have to implement a tool that wraps the LLM, and that can become hard to manage (using the file-read tool from CrewAI would put the output back into the context).
- Not relying on tools and defining one agent per tool instead, but then I'd lose some of the custom logic that I need to code into each tool.
- Using flows instead of crews: this is my last resort, since I've never used them. If it's impossible to achieve this with crews, I'll study and implement flows.
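For reference, here's roughly what the temp-file workaround from the first bullet looks like; the tool name and the stubbed Graph payload are placeholders, not my real implementation:

```python
# Sketch of the temp-file workaround: dump the full payload to disk and return
# only a short summary, so the agent's context stays small.
import json
import tempfile

from crewai.tools import BaseTool


class ListGraphMessagesTool(BaseTool):
    name: str = "list_graph_messages"
    description: str = (
        "Fetches messages from Microsoft Graph, stores the full JSON in a temp "
        "file, and returns only the file path plus a short summary."
    )

    def _run(self, query: str) -> str:
        # Placeholder for the real Graph call; in practice the payload can be huge.
        payload = {"value": [{"subject": f"result for {query}"}] * 5000}

        # Write the full payload to a temp file instead of returning it.
        with tempfile.NamedTemporaryFile(
            mode="w", suffix=".json", delete=False
        ) as fh:
            json.dump(payload, fh)
            path = fh.name

        # Only this short string enters the agent's context window.
        return (
            f"Fetched {len(payload['value'])} messages. "
            f"Full JSON saved to {path}; other tools can read it from there."
        )
```

The catch is exactly what I described: any tool that later needs the file content (e.g. to summarize it) either re-reads it into the context or has to wrap an LLM call itself, which is what becomes hard to manage.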
What do you think? Is it possible to avoid this problem of the agent getting stuck on thinking with large outputs (some of them aren't even that large), or do I need to explore other approaches?
Thanks in advance!