So today (06/10/2025), my Crew testing has stopped because OpenAI is experiencing issues with their APIs (per the OpenAI Status page). This got me thinking: does CrewAI handle this scenario gracefully, other than simply assigning different LLMs to agents? Could a router be used for that? (My understanding is that the router is for logic routing, and I can't think of a way to use it to route to a different LLM provider.)
Anyway, has anyone thought through something similar and/or implemented a solution? Thx.
Welcome to the community @edacee_73848, and what a great question!
Since the LLMs can be adjusted in the crew, such as `llm_reasoning = LLM(model="o4-mini", drop_params=True, additional_drop_params=["stop"])`, you could run a test against the primary LLM's API and, if it fails, drop down to Gemini etc.
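That "probe the primary, fall back on failure" idea could be sketched like this. Everything here (`pick_llm`, `probe`, the candidate names) is hypothetical glue code, not part of the CrewAI API; the probe would typically be a cheap one-token completion against the provider, wrapped in a timeout.

```python
def pick_llm(candidates, probe):
    """Return the first (name, llm) pair whose probe call succeeds.

    candidates: list of (name, llm) pairs in priority order,
                e.g. [("openai", openai_llm), ("gemini", gemini_llm)].
    probe:      callable(llm) that raises on failure, e.g. a cheap
                one-token test call against the provider's API.
    """
    errors = {}
    for name, llm in candidates:
        try:
            probe(llm)  # raises if the provider is down
            return name, llm
        except Exception as exc:
            errors[name] = exc
    raise RuntimeError(f"No LLM provider reachable: {errors}")
```

You would run this once at crew-construction time and hand whichever LLM object it returns to your agents, so the whole crew silently uses the backup provider when the primary is having an outage.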
```python
agent = Agent(
    role="Analyst",
    goal="Analyze the market trends",
    backstory="You are a data-driven analyst skilled in drawing actionable insights.",
    llm=openai_llm,
)

# Crew setup with fallback logic
crew = Crew(
    agents=[agent],
    tasks=[
        Task(
            description="Provide a high-level summary of current market trends.",
            agent=agent,
        )
    ],
    llm=openai_llm,
    fallback_llm=ollama_llm,  # use this if the primary fails
)
```