Does the hierarchical process even work? Your experience would be highly appreciated!

The sources of my doubts

All the hierarchical process topics I found have no responses:

I myself bumped into a blocking bug that reproduces reliably: Agents fail to call each other (and where to edit the system prompt)

Additionally, I found that the manager agent does not interpolate the query, so I see no way to make it decide which task is relevant to the current problem and which is not:

crew.py

  return Crew(
      tasks=self.tasks,
      agents=[ # cannot use self.agents because it includes self.coordinator()
          self.content_planner(),
          ...
          self.chief_editor(),
      ],
      manager_agent=self.coordinator(),
      process=Process.hierarchical,
      verbose=True,
  )

main.py

inputs['prompt'] = 'I am in a writing mood. Inspire me with some ideas for ongoing content pieces'
Publishing().crew().kickoff(inputs=inputs)

agentops log:

You are Content Planning Coordinator. You are a skilled coordinator who excels at prioritizing tasks and ensuring that the team's objectives are met.

Your personal goal is: {prompt}
You ONLY have access to the following tools...

...

Review the content backlog: <the first task from tasks.yaml; the task belongs to another agent>

It offers the coordinator agent a task that is explicitly connected to the content_planner agent. This gives us a clue :bulb: of how to make it start with the correct task. I’ve conducted that experiment too, but the results were even stranger: an implicit ‘Crew Manager’ agent, defined by the framework, got the same task along with the full list of tools belonging to the task’s correct agent. So the Crew Manager attempted to solve the Content Planner’s task by itself. :see_no_evil:


Question :raising_hand_man:

Has anyone run a crew that, on its own, decides which tasks to execute and which to skip as irrelevant?

Maybe you use something other than the hierarchical process?


P.S. I’ve completed ‘Multi AI Agent Systems with crewAI’ and ‘Practical Multi AI Agents and Advanced Use Cases with crewAI.’

There are lessons on the hierarchical process, but they do not really show how those crews work. Following the course steps, you simply:

  1. define a sequential process (this is the part where you still understand what you’re doing, thanks to preceding lessons)
  2. switch the param to hierarchical (stepping into terra incognita)
  3. pray that things will arrange themselves into something relevant (sometimes they do)

So far, the only way I have been able to get hierarchical mode to work is by NOT defining a manager agent and instead just defining a manager_llm; that is all. It seems to take care of the rest.

    return Crew(
        agents=self.agents,  # Automatically created by the @agent decorator
        tasks=self.tasks,  # Automatically created by the @task decorator
        process=Process.hierarchical,
        verbose=True,
        manager_llm='ollama/llama3.2',
    )

Thanks!

But does it always execute all the tasks, or is it picky (based on the inputs)?

If it executes all, then why not sequential - what’s the benefit?

Also, do you see error messages in the log like ‘cannot call another object, because the object {“description”: “…”, “type”: “str”} is not a string?’

I have limited experience and some of the bugs mentioned could be my own fault - but I will share the following observations:

- Turning on verbose mode and piping all output to a text file can reveal much more good material than what makes it into the ‘final’ report stored by the manager. Reviewing that file shows what the manager is trying to do and whether it succeeded in getting a response. Be sure to look at the points where the manager has a ‘Thought’ about what it is doing; that is where I found some rich stuff. (See the sketch after the verbose-mode snippet below for one way to capture that output.)

- Sometimes the interaction between the manager and a coworker does not go as planned; instead, it cycles or times out, leaving a task incomplete or returning a very short answer. So far, the one cause I have observed is this: when the manager tries to delegate to a coworker with a task and a question at the same time, it hits an error like the following:

Tool Output:

Error: the Action Input is not a valid key, value dictionary.

Unfortunately, this occurs just when the run is really getting going and digging into the problem. The task being assigned and the question being asked are really getting to the core of the problem and show that the manager has learned from the earlier rounds.

But even after these errors, the interaction between the coworkers and the manager is impressive, even considering I am still using a local LLM for testing and have not deployed to a production environment - once I was able to see everything going on in verbose mode. Verbose mode was set here:

return Crew(
    agents=self.agents,
    tasks=self.tasks,
    process=Process.hierarchical,
    verbose=True,
    manager_llm='ollama/llama3.2',
)
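
For completeness, here is one way to pipe that verbose output into a text file straight from Python. This is just a sketch: the log filename and the `YourCrew` class are placeholders, and a plain shell redirect (e.g. `python main.py > crew_verbose.log 2>&1`) does the same job.

import contextlib

# Send everything the verbose run prints (manager Thoughts, tool calls, errors)
# to a file you can grep afterwards. Depending on how the framework writes its
# console output, an OS-level shell redirect can be even more reliable.
inputs = {'prompt': 'your request here'}

with open('crew_verbose.log', 'w') as log_file:
    with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
        result = YourCrew().crew().kickoff(inputs=inputs)  # placeholder crew class

print(result)  # the final answer; the intermediate steps live in crew_verbose.log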

To answer the other part of your question, it seems to iterate in this mode more than in sequential mode, and I am seeing benefit as the manager parses answers from one coworker into questions for another coworker.

One additional note: once I switched the crew LLM over from a local Ollama test model to a production Azure-hosted OpenAI model, the behavior of the manager agent changed remarkably. It was clear that the manager was really struggling with the format of the schemas and even developed a Thought that it was having a recurring problem.

“Agent: Crew Manager
Thought: Since previous attempts to delegate the work have failed, I will now simplify the descriptions even further, ensuring the language is direct and clear.”

“Agent: Crew Manager
Thought: Since past attempts to delegate the task have failed due to ongoing validation errors, I need to ensure the input strings are as clear and direct as possible. I will focus on making everything concise and precise.”

The larger LLM still gave a better result, and was orders of magnitude faster, but it was interesting that the delegation of tasks was hanging up so much. In this case the manager asked far fewer questions of the coworkers and just kept trying to adjust the delegation to make the handoff work.

Hope this is helpful.


One more note - the errors the manager gets while trying to delegate a task to a coworker appear related to it not following the format of that tool:
'coworker': {'description': 'The role/name of the coworker to ask', 'type': 'str'}

Instead, it is passing 'coworker': 'name_of_coworker' as the input to the tool.
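
For anyone hitting the same thing, here is roughly what the difference looks like. This is an illustration only; the field names follow the tool description the manager sees, as best I can read it, not code you write yourself.

# The tool's fields are documented as plain strings ('type': 'str'), so a
# well-formed Action Input is a flat key/value dictionary of strings, roughly:
good_action_input = {
    "task": "Review the content backlog and shortlist three ideas",
    "context": "We are planning next month's publishing schedule",
    "coworker": "Content Planner",
}

# The "Action Input is not a valid key, value dictionary" error appears when the
# manager LLM emits something that cannot be parsed into such a flat dict at all,
# for example a bare sentence instead of key/value pairs:
bad_action_input = "Ask the Content Planner to review the backlog"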

I am so grateful for this post. I had almost given up on “hierarchical”, but this got it to work.


I am having an issue getting the manager to delegate tasks to coworkers. It keeps saying that it is trying to use a delegate-to-coworker tool that doesn’t exist. So, using the manager_llm method, do I still keep my manager agent, but add it to the agents list?

Hey Antonio! Yeah, the hierarchical process and delegation in CrewAI definitely seem a bit tricky, judging by how often it pops up here in the forum. So, I’ll take your question as a chance to throw in my two cents (okay, maybe more like twenty cents!) about some details on how the hierarchical process works in CrewAI. Bear with me, and I promise I’ll get to your specific question.

Right, so that tool does exist, but you aren’t the one who creates it. Let’s walk through how a hierarchical process gets set up:

  • Crew(process=Process.hierarchical): This is the magic switch that turns on the hierarchical logic.
  • Then, choose your Manager: You’ve got two options here (there’s a minimal sketch right after this list). Either CrewAI gives you a default manager agent, or you provide your own custom one. You can only pick one of these:
    • manager_llm=<YOUR_LLM_INSTANCE>: CrewAI whips up a default manager agent for you (using the role/goal/backstory found in crewai/translations/en.json), then calls the internal _create_manager_agent method to get it ready. It’s like the classic “If you cannot afford a lawyer, one will be appointed for you.” Great.
    • manager_agent=<YOUR_CUSTOM_MANAGER_AGENT>: Here, you define your own manager Agent instance with its specific role, goal, backstory, and crucially, allow_delegation=True. Basically, “I can afford my own lawyer!”
  • agents=[agent_1, agent_2, ...]: This list should only contain the agents who will actually do the work delegated by the manager. Please, do not include your manager agent in this list.
  • tasks=[task_1, task_2, ...]: These are the high-level goals your Crew needs to tackle. In the hierarchical process, the manager figures out which worker agent gets which task (or a sub-task derived from the main one). Things are starting to click, right? Please tell me they are!
  • Then CrewAI does a crucial step behind the scenes when setting up task execution (check out the _prepare_tools and _update_manager_tools methods):
    • It automatically creates instances of DelegateWorkTool and AskQuestionTool. (Hey Antonio, see where that tool from your error message comes from now?)
    • The descriptions for these tools are generated on the fly (again, take a look at crewai/translations/en.json) to list the available worker agents (e.g., “Delegate a specific task to one of the following coworkers: Researcher, Writer, etc.”).
    • It injects these two tools into the list of tools available exclusively to the manager agent for that specific task execution cycle. This is why you don’t define these tools yourself – they’re built into the hierarchical mechanism.
  • When the manager agent decides it’s time to delegate (because its LLM generates an action to use the Delegate work to coworker tool), a temporary Task gets created, and the system looks for the specified agent (coworker) to handle it.
  • The manager agent gets the result back and continues its thought process – maybe delegating more tasks, asking questions (remember that other secret tool it gets?), or eventually putting together the final answer for the original Crew task.
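To make those two manager options concrete, here’s a minimal sketch. The worker agents, tasks, and model name are placeholders, not something the framework dictates:

from crewai import Agent, Crew, Process

worker_agents = [...]  # your worker agents (e.g. content_planner, chief_editor)
crew_tasks = [...]     # your high-level tasks

# Option A: manager_llm -> CrewAI builds the default manager agent for you
crew_a = Crew(
    agents=worker_agents,          # workers only, never the manager
    tasks=crew_tasks,
    process=Process.hierarchical,
    manager_llm="gpt-4o",
)

# Option B: manager_agent -> bring your own manager (still not listed in `agents`)
coordinator = Agent(
    role="Content Planning Coordinator",
    goal="Decide which tasks matter for the current request and delegate them",
    backstory="A coordinator who excels at prioritizing the team's work.",
    allow_delegation=True,         # per the explanation above, the custom manager needs this
)
crew_b = Crew(
    agents=worker_agents,
    tasks=crew_tasks,
    process=Process.hierarchical,
    manager_agent=coordinator,
)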

See how it’s a process where lots of gears need to mesh perfectly and the whole coordination needs to run smoothly? This level of complexity is something your project needs to prove is worth the risk. As I mentioned in another reply, this behavior is truly agentic and has stochastic characteristics – meaning it’s much more probabilistic than deterministic, you know? Often, that’s the opposite of what you need to solve the problem at hand. Many times, a clear, well-defined process yields a much more predictable (deterministic) result, and it’s up to you to weigh what’s best for your actual use case.

In terms of implementation, I invite you to check out this other thread where I tackled the same topic but provided two functional code examples: a traditional version (using Process.hierarchical) and an approach using Flow that gives you more control over the process.
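
And just to give a rough taste of what the Flow route looks like (this is NOT the code from that thread, only a sketch; the class name, branch labels, and state handling are placeholders):

from crewai.flow.flow import Flow, listen, router, start

class PublishingFlow(Flow):
    @start()
    def read_request(self):
        # In a real flow the prompt would come from flow state / kickoff inputs.
        self.state["prompt"] = "I am in a writing mood. Inspire me with some ideas."

    @router(read_request)
    def pick_branch(self):
        # Plain rules (or a cheap classification call) decide what is relevant,
        # instead of hoping a hierarchical manager figures it out.
        return "ideation" if "inspire" in self.state["prompt"].lower() else "editing"

    @listen("ideation")
    def run_ideation(self):
        # Placeholder: kick off only the crew/tasks relevant to this branch,
        # e.g. IdeationCrew().crew().kickoff(inputs=dict(self.state))
        return "ideation result"

    @listen("editing")
    def run_editing(self):
        return "editing result"

# PublishingFlow().kickoff()

The point is that the routing decision becomes explicit and deterministic, while each branch can still be a regular (even sequential) crew.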

Hope I’ve been clear enough to make things less confusing. Good luck!


As an update to this: I found the latest version of CrewAI a lot better, and of course Max's great advice on how it all works helped. I wanted to include the code I used to solve this once I updated to 0.114.0.

Here is the way I got it working. In this version you define the manager agent in a variable and pass it in. You might want to play with the number of iterations; for complex tasks you might want to increase it into the hundreds… but be prepared for this to cost in LLM $ :slight_smile:

    @crew
    def crew(self) -> Crew:
        """Creates the Research crew"""

        manager = Agent(
            role="Project Manager",
            goal="Efficiently manage the crew and ensure high-quality task completion.",
            backstory="You're an experienced manager, skilled in overseeing complex projects and guiding teams to success.",
            allow_delegation=True,
            max_iter=25,  # the agent's iteration limit; raise it (carefully) for complex tasks
        )

        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            verbose=False,
            planning=True,
            manager_llm=LLM(model="gpt-4o"),
            manager_agent=manager,
            process=Process.hierarchical,
        )