I am working on a project with CrewAI and trying to access the o3-mini model through the OpenAI API. I've set OPENAI_MODEL_NAME=o3-mini in my .env file, but I keep getting the following error:
OpenAIException - Error code: 404 - {'error': {'message': 'The model `o3-mini` does not exist or you do not have access to it.'}}
However, when I switch to gpt-4 or gpt-3.5-turbo, everything works perfectly.
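Is there a way to confirm whether my key actually has access? As a sanity check, something like this against the plain openai SDK (v1.x; untested sketch) should show whether the key can see the model at all:

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# If this raises a 404, the key/org genuinely cannot see o3-mini;
# if it succeeds, the problem is somewhere in the CrewAI/LiteLLM path.
try:
    print(client.models.retrieve("o3-mini"))
except Exception as exc:
    print(exc)

Here is the relevant agent setup (trimmed):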
return Agent(
    config=self.agents_config['researcher'],
    verbose=True,
    allow_delegation=False,  # this will drive the rest of the agents
    max_iter=20,
    llm=llm_mini,
)
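Here llm_mini is CrewAI's LLM wrapper pointed at the model, defined along these lines (minimal sketch; any extra parameters omitted):

from crewai import LLM

# Minimal config; the model name matches OPENAI_MODEL_NAME in .env.
llm_mini = LLM(model="o3-mini")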
Thanks for working on this. An update: I ran crewai update, replaced the model name with OPENAI_MODEL_NAME=o3-mini (instead of gpt-4o, which works well), and restarted the terminal, but I still see the error:
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
ERROR:root:LiteLLM call failed: litellm.NotFoundError: OpenAIException - Error code: 404 - {'error': {'message': 'The model `o3-mini` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
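To take CrewAI out of the loop, the same request can be reproduced with LiteLLM directly (untested sketch, following the debug hint in the log):

import litellm

litellm.set_verbose = True  # the debug switch the log message suggests

# A direct call; this should raise the same NotFoundError
# if the key really has no access to o3-mini.
response = litellm.completion(
    model="o3-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)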
Hey @Tony_Wood, so PR #2738 actually got closed once that native LiteLLM solution we were talking about came up. I’m just waiting on this test with the o3 model to move along, so we can see for sure if that additional_drop_params=["stop"] tweak is actually going to be enough.
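On the LiteLLM side, the tweak amounts to something like this (sketch; whether it's enough for the o3 family is exactly what the test should confirm):

import litellm

# Ask LiteLLM to strip the `stop` parameter before the request reaches
# OpenAI, since the o3 reasoning models reject it.
response = litellm.completion(
    model="o3-mini",
    messages=[{"role": "user", "content": "ping"}],
    additional_drop_params=["stop"],
)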
Hi @Max_Moura, I just tested with the 0.119 release and it's still a problem. Do you get the same?
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Unsupported parameter: 'stop' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'stop', 'code': 'unsupported_parameter'}}
An error occurred while running the flow: Command '['uv', 'run', 'kickoff']' returned non-zero exit status 1.
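In the meantime, would globally dropping unsupported params be a reasonable stopgap? Something like (untested):

import litellm

# Global fallback: silently drop any params the target model does not
# support (including `stop`) instead of failing the request.
litellm.drop_params = True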