Please explain Manager LLM like I am 25

I understand what a planner LLM is, and I know there are LLMs built for tool calling. Based on that, we have a rough idea of which LLMs to choose for:

  • classification
  • tool calling
  • task planning
  • deciding where to use tools and controlling the probability of success with guardrails.

For example, deepseek-r1 seems like the choice for task planning, although qwen2.5-coder also works. And for tool calling, we have a determining factor: function_calling_llm.

And as I understand it, the task-planning LLM is used to control the thought-process part of the execution flow.

But what is this Manager LLM, and what are the characteristics of such an LLM?
What knobs does it tune? From a user's perspective, what difference will there be? And if it's not user-facing, is it there to optimize execution?

My thought is that such an LLM would have the same properties as a task-planner LLM. In terms of training, we would be training for task planning with multiple agents, so that the manager knows how to use multiple agents and thereby has more control over allow_delegation.

I don’t know if that’s right; it also feels more like philosophy than technicality. So it would help if someone could give a rough idea of this part of CrewAI.

Thanks

Hey Amitava,

I think the issue you’re bringing up might be the same one I discussed over in this other thread here, is that right?

My answer there might not cover all of your questions. If that’s the case, once you’ve had a chance to look over the points I clarified there, maybe that can be a starting point for you to explain what doubts still remain.

Yes, thanks. It clears things up.

I had a question. If I were to use a manager LLM, should I mention the order of execution, maybe in the backstory, so that the Thought/Action part has better predictability?

Amitava,

  • If you choose to pass just an LLM—meaning, you use the manager_llm parameter—then the manager Agent itself will be built internally by CrewAI. It’ll create an Agent with a role, goal, and backstory tailored to be a pretty competent manager. This agent will be responsible for coordinating tasks (or sub-tasks, however it decides) and assigning them to the other Agents (these are the coworkers you define the standard way).
  • Now, if you opt to provide a manager_agent instead, it has to be an Agent that you cannot assign any Task to. You’ll need to define its role, goal, and backstory yourself to best suit your needs. That’s pretty much the extent of the control you have over customizing the manager in this case.

This part caught my eye, and I’m going to repeat something I often say: this is an agentic approach, which means you need a good deal of trust in your coordinating agent. You have to accept that at the end of a slightly unpredictable (probabilistic) process, you’ll hopefully get a valid solution. You might get the big picture of what needs doing, you might see details of what’s currently happening, but you’re not micromanaging every single step.

In most situations, you (or your client) will want fine-grained control. Especially in the corporate world, processes are usually very well-defined (or hey, they should be!), and agentic systems need to fit into and boost these established methodologies. For those scenarios, the Flows paradigm in CrewAI is a much better fit. It features well-defined steps, flow control, and persistence, excelling at capturing structured processes and delivering that better predictability. In this other thread, I actually tackled the same problem using both crewai.Process.hierarchical and then showed a really clear and simple solution using crewai.flow.

Awesome, thanks. After reading the thread, I also realized that manager_agent isn’t something I can just drop in; most of the time I’m looking for structured output from the inputs and outputs.
Tbh, writing predictable flows is starting to feel like another way to write APIs, especially the controller layer.
