Mishandling model configuration internally

I’ve been trying to use a large-context OpenAI model, e.g. o3-mini, but CrewAI seems to be mishandling the model configuration. Here’s the code:

from crewai import LLM

llm = LLM(
    model="openai/o3-mini",
    temperature=0.5,
    max_completion_tokens=32768
)

and the error…
Error: litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}

I’ve added max_completion_tokens, cleared the cache, and tried running again four times…

terminal output:
15:22:51 - LiteLLM:INFO: utils.py:2825 -
LiteLLM completion() model= gpt-4o-mini; provider = openai
2025-02-24 15:22:51,536 - LiteLLM - INFO -
LiteLLM completion() model= gpt-4o-mini; provider = openai

and right after that output comes another one…

LiteLLM completion() model= o3-mini; provider = openai
2025-02-24 15:23:20,802 - LiteLLM - INFO -
LiteLLM completion() model= o3-mini; provider = openai

huh?
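
In case it helps narrow things down, here’s the minimal check I’m planning to run that bypasses CrewAI entirely and calls litellm directly with max_completion_tokens (just a sketch; it assumes OPENAI_API_KEY is set in the environment and uses a throwaway prompt):

# Direct litellm call, no CrewAI involved, to see whether
# max_completion_tokens is forwarded to OpenAI as-is for o3-mini
# (i.e. whether the max_tokens remapping happens in CrewAI or in litellm).
import litellm

response = litellm.completion(
    model="openai/o3-mini",
    messages=[{"role": "user", "content": "Say hello"}],
    max_completion_tokens=256,
)
print(response.choices[0].message.content)

If that call succeeds, the 400 error above would seem to be coming from how CrewAI builds the request rather than from litellm itself.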