Hierarchical process always ending with an error

I want to create an agentic flow where a manager agent distributes tasks to coworker agents. For example, one task is identifying the meeting details when the user has specified the participants and the timing; if those details are missing, the flow should ask a follow-up question about whom the user wants to meet with and when. The two tasks I'm using are sketched just below.
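For context, the two tasks wired into the crew further down look roughly like this. This is a simplified sketch: the description and expected_output strings here are abbreviations of the real ones, and I've left the agent field off since the hierarchical manager is supposed to delegate.

from crewai import Task

# Simplified sketch of the two tasks (names match the crew definition below).
intent_indetify_task = Task(
    description='Check whether the query "{query}" contains both the participants and the meeting time. '
                'If anything is missing, ask the user (Jessica) a follow-up question to collect it.',
    expected_output='Either a confirmation that all details are present, or a follow-up question for the missing details.',
)

scheduler_task = Task(
    description='Extract the participants and the meeting time from "{query}" and return them as structured JSON.',
    expected_output='A JSON object with "participants" and "time" fields.',
)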

import os
from textwrap import dedent

from crewai import Agent, LLM


class MeetingAgents():
  def __init__(self):
      self.GROQ = LLM(model=os.environ['MODEL'], temperature=0.7, api_key=os.environ['GROQ_API_KEY'])

  def meeting_intent_analyzer(self):
      return Agent(
          role='Meeting Detail Checker',
          goal='Analyze the user query "{query}" to check if it includes both participants and scheduling time. If any information is missing, prompt the user to provide it.',
          backstory=dedent("""
          You're skilled at parsing user queries to identify key meeting details like participants and scheduling time. 
          Your task is to ensure all necessary information is provided to move forward. If details are incomplete, you 
          engage the user (here user will always be Jessica) with clarifying questions to collect the missing data efficiently.
          """),
          verbose=True,
          llm=self.GROQ,
          allow_delegation=True,
          max_iter=5
      )

  def meeting_info_extractor(self):
      return Agent(
          role='Meeting Information Extractor',
          goal='Extract and validate the meeting details (participants and scheduling time) from the query "{query}" and format them into a structured JSON object.',
          backstory=dedent("""
          You're responsible for extracting and validating meeting details from user queries. 
          Known for your precision, you ensure all required fields like participants and time/date 
          are correctly identified and formatted. Your ability to create structured data simplifies 
          the scheduling process for downstream tasks.
          """),
          verbose=True,
          llm=self.GROQ,
          allow_delegation=True,
          max_iter=5
      )

  def manager_agent(self):
      return Agent(
          role="Task Workflow Manager",
          goal="Efficiently manage tasks derived from conversations, including downloading meeting summaries, analyzing overall sentiment, setting up follow-ups, \
                updating CRM entries, and creating proposals. Ensure that each task is assigned to the appropriate agent or tool, completed accurately, and delivered \
                on time. Maintain clarity and efficiency by treating each task as distinct yet interdependent, optimizing the workflow for seamless execution.",
          backstory=dedent("""
                           You're an experienced task manager specializing in processing actionable items from conversations. Your expertise lies in coordinating tasks such 
                    as generating meeting summaries, evaluating sentiments, scheduling follow-ups, updating CRM systems, and drafting proposals. You ensure that all 
                    tasks are executed efficiently, meet the required standards, and align with the overarching goals of the organization. Your role ensures that 
                    conversation outcomes are effectively translated into meaningful actions. Your role is to coordinate the efforts of the crew members, ensuring that 
                    each task is completed on time and to the highest standard
                           """),
          allow_delegation=False,
          llm=self.GROQ
      )
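
For completeness, the agent variables used in the crew below are created from this class roughly as follows. This is a sketch; in particular, mapping meeting_scheduler to meeting_info_extractor() is my assumption about how the names line up.

agents = MeetingAgents()

meeting_intent_analyser = agents.meeting_intent_analyzer()
meeting_scheduler = agents.meeting_info_extractor()  # assumed: the "scheduler" agent is the info extractor
manager_agent = agents.manager_agent()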

But the process either goes into an infinite loop, or, if I set max_iter, I get the error below.

  • This is my Crew:
crew = Crew(
      agents=[
        meeting_intent_analyser, meeting_scheduler
      ],
      tasks=[intent_indetify_task, scheduler_task],
      verbose=True,
      # memory=True,
      # embedder={
      #   "provider": "ollama",
      #   "config": {
      #       "model": 'mxbai-embed-large'
      #   }
      # },
      manager_agent=manager_agent,
      process=Process.hierarchical,
      manager_llm=agents.GROQ
    )
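
The run() wrapper that shows up in the traceback is essentially just this. It is reconstructed from the stack trace, so the class name and structure are approximate; the crew is the same one shown above.

from crewai import Crew, Process

class MeetingCrew:
  def run(self):
      crew = Crew(
          agents=[meeting_intent_analyser, meeting_scheduler],
          tasks=[intent_indetify_task, scheduler_task],
          manager_agent=manager_agent,
          process=Process.hierarchical,
          manager_llm=agents.GROQ,
          verbose=True,
      )
      # The "{query}" placeholder in the goals/descriptions would normally be
      # filled via kickoff inputs, e.g. crew.kickoff(inputs={"query": "..."}).
      result = crew.kickoff()
      return result


trip_crew = MeetingCrew()
result = trip_crew.run()
print(result)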
  • Error:
File "E:\worxwide_projects\master-module\meeting-scheduler-module\main.py", line 68, in <module>
    result = trip_crew.run()
             ^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\master-module\meeting-scheduler-module\main.py", line 55, in run
    result = crew.kickoff()
             ^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\crew.py", line 557, in kickoff
    result = self._run_hierarchical_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\crew.py", line 667, in _run_hierarchical_process
    return self._execute_tasks(self.tasks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\crew.py", line 760, in _execute_tasks
    task_output = task.execute_sync(
                  ^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\task.py", line 192, in execute_sync
    return self._execute_core(agent, context, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\task.py", line 250, in _execute_core
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\agent.py", line 356, in execute_task
    raise e
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\agent.py", line 345, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 103, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 203, in _invoke_loop
    raise e
  File "E:\worxwide_projects\venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 135, in _invoke_loop
    raise ValueError(
ValueError: Invalid response from LLM call - None or empty.

Can anyone help me with this, please?

This has recently been discussed here. Please follow that topic instead and add your observations there; I'm therefore closing this one. In particular, see the following answer: