Crew method as tool

Is it possible to use a crew instance method as a tool for a Task/Agent?

Like a crew that calls a crew?

I have solved my problem; once the solution is working I'll update here, or in the channel.


This topic can be closed now. See here instead.

Please do; I'm very interested in that.

It seems like you would just create a custom tool that routes from one crew to another. At first I thought it might not be necessary, since agents within a crew have roles, and those roles could be considered a form of compartmentalization. However, I've often found that the "chain" doesn't always go as expected: sometimes an agent puts in its two cents when it's not really its role, usually when more than one agent has delegation of tasks enabled. I could see this being useful in more ways than one, to ensure each crew is focused on a very specific goal, like research, coding, or testing. A delegating agent in the crew essentially sees other agents as tools. (Correct me if I'm wrong.)
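
For example, here's a minimal sketch of such a routing tool, assuming the crewai_tools BaseTool interface (research_crew is a hypothetical Crew instance you'd build elsewhere):

from crewai_tools import BaseTool

class ResearchCrewTool(BaseTool):
    name: str = "Research Crew"
    description: str = "Delegates a research question to a dedicated research crew."

    def _run(self, question: str) -> str:
        # Kick off the sub-crew and hand its final output
        # back to the calling agent as an ordinary tool result
        result = research_crew.kickoff(inputs={"question": question})  # research_crew defined elsewhere
        return str(result)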

Better still would be to use pipelines: crewAI Pipelines - crewAI
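
A minimal sketch, assuming the Pipeline API as documented at the time (Pipelines were later superseded by Flows in newer CrewAI releases); research_crew and writing_crew are hypothetical Crew instances:

import asyncio
from crewai import Pipeline

pipeline = Pipeline(stages=[research_crew, writing_crew])  # crews defined elsewhere

async def main():
    # Each input dict seeds one run; each stage's output feeds the next stage
    results = await pipeline.kickoff([{"topic": "AI agent frameworks"}])
    print(results)

asyncio.run(main())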


I was leaving pipelines until I moved on to larger crewAI projects, but I'll find some way to have a play with them.

@Dabnis as you noted in your video project, keeping the data in structures can minimize the glut in the context. In your work on prompt and crew optimization, can you rewrite the agents, tasks, and perhaps even some of the structured data, and pipe that into a new crew with new LLM instances, thereby jettisoning a lot of the built-up context glut?
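
Roughly what I'm imagining, as a sketch (all names hypothetical):

from crewai import Agent, Task, Crew

# The first crew does the heavy lifting and returns a distilled, structured result
summary = first_crew.kickoff(inputs={"topic": "video catalogue"})  # first_crew defined elsewhere

# A brand-new crew with fresh agents: only the distilled data crosses over,
# not the accumulated context
writer = Agent(role="Writer", goal="Draft the report", backstory="Technical writer")
write_task = Task(
    description="Write a report using only this data: {distilled}",
    expected_output="A short report",
    agent=writer,
)
second_crew = Crew(agents=[writer], tasks=[write_task])

result = second_crew.kickoff(inputs={"distilled": str(summary)})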

@moto
Many of our discussions end up talking about 'context'. As yet I haven't dived into the code to see how CrewAI handles/exposes the context, if at all.

FYI: The concept of saving blocks of context/task output for recall later could be achieved by instantiating a dictionary on the crew instance within __init__, plus methods to save and recall by 'tag/bookmark' text :slight_smile:
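
Something along these lines (a hypothetical wrapper, not a CrewAI API):

class BookmarkedCrew:
    def __init__(self, crew):
        self.crew = crew
        self._bookmarks = {}  # tag -> saved block of context/task output

    def save(self, tag: str, text: str) -> None:
        # Stash a block of task output under a tag for later recall
        self._bookmarks[tag] = text

    def recall(self, tag: str) -> str:
        # Pull it back later to seed a fresh task, instead of dragging
        # the full conversation context along
        return self._bookmarks.get(tag, "")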

Re: that repo, note how I use a 'GetNextVideo' tool to process each item by itself. Without this, the context would have been flooded with video detail objects. Remember, I have 1000+ videos! Hence the GetNext tool.
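
A rough sketch of the pattern, assuming the crewai_tools BaseTool interface (the video list here is a stand-in for my real data):

from crewai_tools import BaseTool

VIDEOS = [{"id": 1, "title": "Intro"}, {"id": 2, "title": "Setup"}]  # 1000+ in practice
_cursor = iter(VIDEOS)

class GetNextVideo(BaseTool):
    name: str = "GetNextVideo"
    description: str = "Returns the next unprocessed video, one per call."

    def _run(self) -> str:
        # Hand the agent exactly one item per call, so the context
        # never holds the whole list of video detail objects
        try:
            return str(next(_cursor))
        except StopIteration:
            return "DONE"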

So if you use, say, Ollama and look at the full log of the context being passed on each call to the LLM, how polluted is it getting?

TBH: I need to find the best way to monitor the context, with or without Ollama.

I use LM Studio as an endpoint for local LLMs as it has some more features I like.
You can set the UI to developer mode and see all the detail in the logs in real time, and it is amazing how quickly the context fills with junk. Or turn this on:
import os
import litellm

litellm.set_verbose = True  # log full request/response payloads
os.environ['LITELLM_LOG'] = 'DEBUG'
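
Another option, if you want something programmatic rather than raw debug logs: a custom litellm success callback that reports the prompt size on every call. A sketch, assuming litellm's documented custom-callback signature:

import litellm

def log_context_size(kwargs, completion_response, start_time, end_time):
    # kwargs carries the request; "messages" is the full context sent to the LLM
    messages = kwargs.get("messages", [])
    chars = sum(len(str(m.get("content", ""))) for m in messages)
    print(f"LLM call: {len(messages)} messages, ~{chars} chars of context")

litellm.success_callback = [log_context_size]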


I've used LM Studio on Windows; I'll see if I can put it on my Linux/dev box.

Got it here, thanks :slight_smile:

Just using the verbose and debug settings in litellm gives you an easy look at the POST requests to the LLM. The hydrated planner context, for example, gets huge.