Is it possible to use a crew instance method as a tool for a Task/Agent?
Like a crew that calls a crew?
I have solved my problem; when the solution is working I'll update here, or in the channel.
This topic can be closed now. See here instead
Please do, I'm very interested in that.
Seems like you would just create a custom tool which routes from one crew to another crew. At first I was thinking it might not be necessary, since agents within a crew have roles, and those roles could be considered a form of compartmentalization. However, I've often found that the "chain" doesn't always go as expected, and sometimes an agent puts in their two cents when it's not really their role, usually when you have more than one agent with delegation of tasks enabled. I could see this being useful in more ways than one, to ensure each crew is focused on a very specific goal, like research, coding, testing, etc. A delegating agent in the crew essentially sees other agents as tools. (Correct me if I'm wrong.)
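For what it's worth, a minimal sketch of what that routing tool could look like. Everything here is hypothetical, and the BaseTool import path varies by crewai version (older releases expose it from crewai_tools):

```python
from crewai import Agent, Crew, Task
from crewai.tools import BaseTool  # older versions: from crewai_tools import BaseTool

# Hypothetical inner crew, focused on a single goal (research only).
researcher = Agent(
    role="Researcher",
    goal="Answer research questions concisely",
    backstory="A specialist who does nothing but research.",
)
research_task = Task(
    description="Research the following question: {question}",
    expected_output="A concise answer.",
    agent=researcher,
)
research_crew = Crew(agents=[researcher], tasks=[research_task])


class ResearchCrewTool(BaseTool):
    name: str = "research_crew"
    description: str = "Route a research question to a dedicated research crew."

    def _run(self, question: str) -> str:
        # Kick off the inner crew and hand back only its final output,
        # so the outer crew's context stays small.
        return str(research_crew.kickoff(inputs={"question": question}))
```

An agent in the outer crew would then get `tools=[ResearchCrewTool()]` and could call into the research crew like any other tool.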
Better still would be to use pipelines: see the crewAI Pipelines docs.
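If I'm reading the pipelines docs right, the shape is roughly the following; the Pipeline API has moved between releases, so treat the exact signatures as assumptions (`research_crew` and `coding_crew` stand for two pre-built, single-purpose crews like the sketch above):

```python
import asyncio

from crewai import Pipeline

# Two pre-built, single-purpose Crew instances are assumed here.
pipeline = Pipeline(stages=[research_crew, coding_crew])


async def main():
    # Each input dict is fed to the first stage; each stage's output
    # flows into the next, so crews stay focused and contexts stay small.
    results = await pipeline.kickoff([{"question": "How do crews share context?"}])
    for result in results:
        print(result)


asyncio.run(main())
```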
I was leaving pipelines until I moved on to larger crewAI projects, but I will think of some way of having a play with them.
@Dabnis, as you noted in your video project, keeping the data in structures can minimize the glut in the context. In your work on prompt and crew optimization, can you rewrite the agents, tasks, etc., and perhaps even some of the structured data, and pipe that into a new crew with new LLM instances, thereby jettisoning a lot of the built-up context glut?
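Something like this sketch is what I have in mind (all names are hypothetical, and I'm assuming kickoff returns a CrewOutput whose `.raw` holds the final text):

```python
# First run accumulates context; we keep only its distilled final output.
result = research_crew.kickoff(inputs={"topic": "video catalogue"})
brief = result.raw  # assumed: CrewOutput.raw is the final text

# A brand-new crew with fresh LLM instances starts from a clean context,
# seeded only with the structured output of the previous run.
fresh_crew = build_coding_crew()  # hypothetical factory returning a new Crew
fresh_crew.kickoff(inputs={"brief": brief})
```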
@moto
Many of our discussions end up talking "context". As yet I've not dived into the code to see how CrewAI handles/exposes, if at all, the "context".
FYI: The concept of saving blocks of context/task output for recall later could be achieved by instantiating a dictionary on the crew instance within __init__, and adding methods to save & recall by "tag/bookmark text".
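A minimal sketch of that idea (all names hypothetical; note that Crew is a Pydantic model in recent versions, so holding the dictionary in a small companion object is safer than bolting attributes onto the crew instance itself):

```python
class ContextStore:
    """Holds blocks of context/task output for recall by tag."""

    def __init__(self):
        self._blocks: dict[str, str] = {}

    def save(self, tag: str, block: str) -> None:
        # Bookmark a block of task output under a short tag.
        self._blocks[tag] = block

    def recall(self, tag: str) -> str:
        # Recall a previously saved block; empty string if the tag is unknown.
        return self._blocks.get(tag, "")
```

Exposed through a pair of custom save/recall tools, agents could bookmark a task's output and pull it back later without it riding along in every prompt.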
Re: that repo, note how I use a "GetNextVideo" tool to process each item by itself; without this the context would have been flooded with video detail objects. Remember, I have 1000+ videos! Hence the GetNext tool.
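For anyone reading along, the pattern is roughly this (a hedged reconstruction, not the repo's actual code; pydantic's PrivateAttr is needed because crewai tools are Pydantic models):

```python
from pydantic import PrivateAttr

from crewai.tools import BaseTool


class GetNextVideo(BaseTool):
    name: str = "GetNextVideo"
    description: str = "Return the next unprocessed video's details, one per call."

    videos: list[dict] = []  # the full 1000+ item catalogue never enters the context
    _index: int = PrivateAttr(default=0)

    def _run(self) -> str:
        # Hand the agent exactly one video detail object per call.
        if self._index >= len(self.videos):
            return "DONE: no more videos."
        video = self.videos[self._index]
        self._index += 1
        return str(video)
```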
So if you use, say, Ollama and look at the full log of the context being passed on each call to the LLM, how polluted is it getting?
TBH: I need to find the best way to monitor the context, with or without Ollama.
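One option, with or without Ollama: litellm supports custom callback handlers, so something like this sketch (hook names may differ across litellm versions) would print the rough size of every prompt CrewAI sends:

```python
import litellm
from litellm.integrations.custom_logger import CustomLogger


class ContextMonitor(CustomLogger):
    def log_pre_api_call(self, model, messages, kwargs):
        # Rough size check: characters across all messages in this call.
        size = sum(len(m.get("content") or "") for m in messages)
        print(f"[{model}] {len(messages)} messages, ~{size} chars of context")


litellm.callbacks = [ContextMonitor()]
```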
I use LM Studio as an endpoint for local LLMs as it has some more features I like.
You can set the UI to developer mode and see all the detail in the logs in real time, and it is amazing how quickly the context fills with junk. Or turn this on:
import os
import litellm

litellm.set_verbose = True
os.environ["LITELLM_LOG"] = "DEBUG"
I've used LM Studio in Windows; I'll see if I can put it on my Linux dev box.
Got it here, thanks
Just using the verbose and debug settings in litellm really gets you an easy look at the POST requests to the LLM. The hydration planner context, for example, gets huge.