"Fallback" LLM Configuration

So today (06/10/2025), my Crew testing has stopped because OpenAI is experiencing issues with their APIs (see the OpenAI Status page). This got me thinking: does CrewAI handle this scenario gracefully, other than by simply assigning different LLMs to agents? Could a router be used for that? (My understanding is that the router is for logic routing, and I can't think of a way to use it to route to a different LLM provider.)

Anyway, has anyone thought through something similar and/or implemented a solution? Thx.

Welcome to the community, @edacee_73848, and what a great question!

As the LLMs can be adjusted in the crew, e.g. llm_reasoning = LLM(model="o4-mini", drop_params=True, additional_drop_params=["stop"]), you could run a test against the LLM API and, if it fails, drop to Gemini etc.
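A minimal sketch of that test-and-fall-back idea in plain Python (the pick_llm helper and the probe functions are illustrative, not CrewAI APIs):

```python
def pick_llm(probes):
    """probes: list of (name, probe_fn) pairs, in priority order.

    Returns the name of the first provider whose probe succeeds,
    where a probe might be a cheap test completion against the API.
    """
    for name, probe in probes:
        try:
            probe()           # raises if the provider is down
            return name
        except Exception:
            continue          # provider failed -> try the next one
    raise RuntimeError("All LLM providers failed their health check")


# Example: the primary ("openai") is down, the fallback ("gemini") is up.
def failing_probe():
    raise ConnectionError("simulated API outage")

def ok_probe():
    return "pong"

chosen = pick_llm([("openai", failing_probe), ("gemini", ok_probe)])
# chosen == "gemini"
```

The same pattern works whichever providers you line up; the probe just has to raise on failure.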

Thanks, @Tony_Wood … I was, obviously, unaware of the drop_params Litellm feature. Thanks!

Do you know what the logic looks like to switch to an alternative LLM after “stop”?

For the older models you can use llm_reasoning = LLM(model="gpt-4.1"), as dropping stop is only needed for o4-mini and o3.

Logic for stop: sorry, I haven't written it, so I would need to work it out.

Thanks, @Tony_Wood. Your suggestion got me on the right track. I'm going to iterate on the following logic:

```python
from crewai import Crew, LLM, Agent, Task
import os

# Primary LLM (OpenAI)
openai_llm = LLM(
    model="gpt-4",
    provider="openai",
    drop_params=True,
    additional_drop_params=["stop"]
)

# Fallback LLM (Ollama, local inference)
ollama_llm = LLM(
    model="llama3",
    provider="ollama",
    drop_params=True
)

# Example agent using OpenAI first
agent = Agent(
    role="Analyst",
    goal="Analyze the market trends",
    backstory="You are a data-driven analyst skilled in drawing actionable insights.",
    llm=openai_llm
)

# Crew setup with fallback logic
crew = Crew(
    agents=[agent],
    tasks=[
        Task(
            description="Provide a high-level summary of current market trends.",
            agent=agent
        )
    ],
    llm=openai_llm,
    fallback_llm=ollama_llm  # Use this if the primary fails
)
```

