Hierarchical Crews: Manager Agent requires OpenAI

Hi,
I use a non-OpenAI LLM and pass it to my agents and tasks via the llm= parameter.
My sequential crew:

my_crew = Crew(
    agents=[senior_researcher_agent, junior_researcher_agent],
    tasks=[senior_research_task, junior_researcher_task],
    process=Process.sequential,
    verbose=True
)

Runs fine.
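
(For context, the agents and tasks are wired roughly like the sketch below; the roles, goals and descriptions are abbreviated placeholders rather than my exact definitions. The shared llm object is the non-OpenAI LLM I assign via the llm= parameter.)

# Rough sketch of the agent/task wiring (details abbreviated); the shared
# non-OpenAI LLM object is passed to each agent through the llm= parameter.
from crewai import Agent, Task

senior_researcher_agent = Agent(
    role="Senior researcher",
    goal="Handle the complex research questions",
    backstory="An experienced researcher reserved for the harder questions.",
    llm=llm,  # the non-OpenAI LLM defined earlier
    verbose=True
)

junior_researcher_agent = Agent(
    role="Junior researcher",
    goal="Handle the straightforward research questions",
    backstory="A capable researcher for routine questions.",
    llm=llm,
    verbose=True
)

senior_research_task = Task(
    description="Research and answer the complex question.",
    expected_output="A detailed, well-sourced answer.",
    agent=senior_researcher_agent
)

junior_researcher_task = Task(
    description="Research and answer the routine question.",
    expected_output="A concise answer.",
    agent=junior_researcher_agent
)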

However, my hierarchical crew:

hierarchical_crew = Crew(
    agents=[junior_researcher_agent, senior_researcher_agent],
    tasks=[senior_research_task, junior_researcher_task],
    process=Process.hierarchical,
    manager_agent=supervisor_agent, 
    planning=True,
    verbose=True
)

My supervisor agent:

supervisor_agent = Agent(
    role="A supervisor agent",
    goal="To ensure that overspend doesn't happen and the senior researcher is only used for complex tasks",
    backstory="""A seasoned supervisor who understands the difference between complex and easier tasks and will assign the questions to the correct agent based to ensure costs stay maintainable.""",
    verbose=True,
    allow_delegation=True,
    llm=llm,
)

Returns this error:

2025-03-11 08:23:24,739 - 139980965573632 - llm.py-llm:388 - ERROR: LiteLLM call failed: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable


File /opt/conda/envs/Python-RT24.1/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py:357, in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
    352 elif (
    353     "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
    354     in error_str
    355 ):
    356     exception_mapping_worked = True
--> 357     raise AuthenticationError(
    358         message=f"AuthenticationError: {exception_provider} - {message}",
    359         llm_provider=custom_llm_provider,
    360         model=model,
    361         response=getattr(original_exception, "response", None),
    362         litellm_debug_info=extra_information,
    363     )
    364 elif "Mistral API raised a streaming error" in error_str:
    365     exception_mapping_worked = True

How are you defining your llm? I assume something like:

llm = LLM(
    model="mistral/mistral-large-latest",
    temperature=0.7
)

It seems like you are using Mistral. Have you defined the API key in your .env file?

MISTRAL_API_KEY=<your-api-key>
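
If the .env file isn't being picked up automatically, I believe you can also load it yourself and pass the key explicitly when constructing the LLM. A minimal sketch, assuming python-dotenv is installed:

import os
from dotenv import load_dotenv
from crewai import LLM

load_dotenv()  # reads MISTRAL_API_KEY from .env into the environment

llm = LLM(
    model="mistral/mistral-large-latest",
    temperature=0.7,
    api_key=os.getenv("MISTRAL_API_KEY")  # pass the key explicitly as a fallback
)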

Which Mistral model are you using?

llm = LLM(
    model=os.environ["MODEL"],
    base_url=os.environ["WATSONX_URL"]
)
I'm using llama-3-3.
This works fine when I run the sequential process and I have no issues there.

The error only crops up when I use the hierarchical process.

So I am assuming there is something in the hierarchical process that doesn't pick up the llm I assigned.

Running LiteLLM with debug logging gives me the output below, where you can see it chooses to use OpenAI (gpt-4o-mini).
I even added this line to the crew definition: manager_llm=llm,

2025-03-11 10:08:33,205 - 139936302588928 - utils.py-utils:298 - DEBUG: 

10:08:33 - LiteLLM:DEBUG: utils.py:298 - Initialized litellm callbacks, Async Success Callbacks: [<crewai.utilities.token_counter_callback.TokenCalcHandler object at 0x7f4525706410>]
2025-03-11 10:08:33,207 - 139936302588928 - utils.py-utils:298 - DEBUG: Initialized litellm callbacks, Async Success Callbacks: [<crewai.utilities.token_counter_callback.TokenCalcHandler object at 0x7f4525706410>]
10:08:33 - LiteLLM:DEBUG: litellm_logging.py:377 - self.optional_params: {}
2025-03-11 10:08:33,209 - 139936302588928 - litellm_logging.py-litellm_logging:377 - DEBUG: self.optional_params: {}
10:08:33 - LiteLLM:DEBUG: utils.py:298 - SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
2025-03-11 10:08:33,211 - 139936302588928 - utils.py-utils:298 - DEBUG: SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
10:08:33 - LiteLLM:DEBUG: transformation.py:115 - Translating developer role to system role for non-OpenAI providers.
2025-03-11 10:08:33,230 - 139936302588928 - transformation.py-transformation:115 - DEBUG: Translating developer role to system role for non-OpenAI providers.
10:08:33 - LiteLLM:INFO: utils.py:2896 - 
LiteLLM completion() model= gpt-4o-mini; provider = openai
2025-03-11 10:08:33,233 - 139936302588928 - utils.py-utils:2896 - INFO: 
LiteLLM completion() model= gpt-4o-mini; provider = openai

Checking the debug trace, I found the parameter that solves the issue:

planning_llm=llm,
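
For anyone who lands here with the same problem, this is roughly what my working setup looks like now. The MODEL and WATSONX_URL environment variables are my own naming, and the agent/task/supervisor definitions are as shown earlier in the thread:

import os
from crewai import Crew, Process, LLM

# Non-OpenAI LLM served through watsonx via LiteLLM
llm = LLM(
    model=os.environ["MODEL"],
    base_url=os.environ["WATSONX_URL"]
)

hierarchical_crew = Crew(
    agents=[junior_researcher_agent, senior_researcher_agent],
    tasks=[senior_research_task, junior_researcher_task],
    process=Process.hierarchical,
    manager_agent=supervisor_agent,  # supervisor already carries llm=llm
    manager_llm=llm,                 # stops the manager from defaulting to OpenAI
    planning=True,
    planning_llm=llm,                # stops the planning step from defaulting to OpenAI
    verbose=True
)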