Hello everyone,

I've encountered an issue where the call method of a CrewAI LLM, despite being a wrapper around litellm.completion, doesn't seem to handle the response_format parameter correctly. The example code below demonstrates the problem:
from litellm import get_supported_openai_params, completion
from pydantic import BaseModel
from crewai import LLM
import os
PROVIDER = 'openrouter'
MODEL = 'google/gemini-2.0-flash-lite-preview-02-05:free' # I'm poor, gimme free version
# First, let's confirm that 'response_format' is supported by the chosen provider and model.
supported_params = get_supported_openai_params(
    model=MODEL,
    custom_llm_provider=PROVIDER,
)
if 'response_format' in supported_params:
    print('Yeah, we got response_format!')
else:
    print('No response_format, baby!')
# Set up our tests
os.environ['OPENROUTER_API_KEY'] = 'YOUR_KEY_NOT_MINE'
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]
# Test if litellm.completion can handle response_format correctly.
messages = [
    {'role': 'system', 'content': 'Extract the event information.'},
    {'role': 'user', 'content': 'Alice and Bob are going to a science fair on Friday.'},
]
litellm_response = completion(
    model=f'{PROVIDER}/{MODEL}',
    messages=messages,
    response_format=CalendarEvent,
)
print(f'\nLiteLLM Response:\n\n{litellm_response}')
# Test if crewai.LLM.call can handle response_format correctly.
gemini_llm = LLM(
    model=f'{PROVIDER}/{MODEL}',
    response_format=CalendarEvent,
)
crewai_response = gemini_llm.call(
    "Extract the event information:\n\n"
    "Alice and Bob are going to a science fair on Friday."
)
print(f'\nCrewAI Response:\n\n{crewai_response}')
When running the code above, I get the following error:
ValueError: The model openrouter/google/gemini-2.0-flash-lite-preview-02-05:free does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
Note the contradiction: get_supported_openai_params reports that response_format is supported for this exact model and provider, yet the CrewAI wrapper, which ultimately calls litellm, rejects the same parameter.
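For what it's worth, litellm exposes more than one capability probe, and my guess (unverified) is that CrewAI's validation relies on a stricter check such as litellm.supports_response_schema rather than get_supported_openai_params. The snippet below is a minimal diagnostic sketch, assuming supports_response_schema is exported by your litellm version, to see whether the two probes disagree for this model:

from litellm import get_supported_openai_params, supports_response_schema

PROVIDER = 'openrouter'
MODEL = 'google/gemini-2.0-flash-lite-preview-02-05:free'

# Probe 1: the parameter list used in my check above.
params = get_supported_openai_params(model=MODEL, custom_llm_provider=PROVIDER)
print('response_format listed:', 'response_format' in (params or []))

# Probe 2: the schema-specific capability flag, which I suspect (but have not
# confirmed) is closer to what CrewAI's validation consults.
print('supports_response_schema:',
      supports_response_schema(model=MODEL, custom_llm_provider=PROVIDER))

If the first probe says yes and the second says no, that would at least explain where the wrapper and the direct litellm call diverge.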
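In the meantime, the only workaround I can think of is to drop response_format from the LLM constructor and validate the text output myself. A rough sketch follows; the prompt wording and the assumption that the model returns bare JSON are mine, and a real version would need to strip code fences or retry on validation failure:

from crewai import LLM
from pydantic import BaseModel

PROVIDER = 'openrouter'
MODEL = 'google/gemini-2.0-flash-lite-preview-02-05:free'

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# No response_format here, so CrewAI's validation doesn't trip.
# Assumes OPENROUTER_API_KEY is already set, as above.
llm = LLM(model=f'{PROVIDER}/{MODEL}')
raw = llm.call(
    'Extract the event information and reply with only a JSON object '
    'with keys "name", "date", and "participants":\n\n'
    'Alice and Bob are going to a science fair on Friday.'
)
event = CalendarEvent.model_validate_json(raw)
print(event)

That obviously loses the provider-enforced structured output, which is exactly what I wanted response_format for, so I'd still like to know whether this is a bug or intended behaviour.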