### Description
I'm using Colab to test crewAI. After upgrading to 0.60.0 with the following command:
```
!pip install 'crewai[tools]'==0.60.0
```
The following code no longer works:
```
import os
from crewai import Agent, Task, Crew, Process
# from crewai_tools import SerperDevTool

os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxx"
os.environ["OPENAI_API_BASE"] = 'https://api.sambanova.ai/v1/'
os.environ["OPENAI_MODEL_NAME"] = 'Meta-Llama-3.1-8B-Instruct'

researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    allow_delegation=False,
)

writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True,
)

task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher,
)

task2 = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.
    Make it sound cool, avoid complex words so it doesn't sound like AI.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=True,
    process=Process.sequential,
)

result = crew.kickoff()
print("######################")
print(result)
```
This code worked perfectly before tonight's upgrade to 0.60.0.
### Steps to Reproduce
Install `crewai[tools]==0.60.0` on Colab and run the code above.
### Expected behavior
The crew runs against the SambaNova endpoint using the model set in `OPENAI_MODEL_NAME`, as it did on earlier versions.
### Screenshots/Code snippets
```
BadRequestError: litellm.BadRequestError: OpenAIException - Unknown model: gpt-4o
```
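Note that the error text is returned by the SambaNova endpoint itself: crewAI is requesting `gpt-4o` instead of the model set in `OPENAI_MODEL_NAME`. The traceback in the Evidence section below shows the call going through the new `crewai/llm.py` LiteLLM path, which appears to fall back to `gpt-4o` when no model is passed explicitly.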
### Operating System
Other (specify in additional context)
### Python Version
3.10
### crewAI Version
0.60.0
### crewAI Tools Version
0.60.0
### Virtual Environment
Venv
### Evidence
I'm encountering the following error:
```
# Agent: Senior Research Analyst
## Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
[... the two LiteLLM lines above repeat six times per attempt, and the whole agent/task block repeats three times as the task is retried ...]
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py in completion(self, model_response, timeout, optional_params, logging_obj, model, messages, print_verbose, api_key, api_base, acompletion, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider, drop_params)
    906 else:
--> 907 raise e
    908 except OpenAIError as e:
43 frames
/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py in completion(self, model_response, timeout, optional_params, logging_obj, model, messages, print_verbose, api_key, api_base, acompletion, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider, drop_params)
    824 headers, response = (
--> 825 self.make_sync_openai_chat_completion_request(
    826 openai_client=openai_client,
/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py in make_sync_openai_chat_completion_request(self, openai_client, data, timeout)
    682 except Exception as e:
--> 683 raise e
    684
/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py in make_sync_openai_chat_completion_request(self, openai_client, data, timeout)
    671 try:
--> 672 raw_response = openai_client.chat.completions.with_raw_response.create(
    673 **data, timeout=timeout
/usr/local/lib/python3.10/dist-packages/openai/_legacy_response.py in wrapped(*args, **kwargs)
    349
--> 350 return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    351
/usr/local/lib/python3.10/dist-packages/openai/_utils/_utils.py in wrapper(*args, **kwargs)
    273 raise TypeError(msg)
--> 274 return func(*args, **kwargs)
    275
/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py in create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, service_tier, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    703 validate_response_format(response_format)
--> 704 return self._post(
    705 "/chat/completions",
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in post(self, path, cast_to, body, options, files, stream, stream_cls)
    1259 )
-> 1260 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
    1261
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in request(self, cast_to, options, remaining_retries, stream, stream_cls)
    936 ) -> ResponseT | _StreamT:
--> 937 return self._request(
    938 cast_to=cast_to,
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
    1040 log.debug("Re-raising status error")
-> 1041 raise self._make_status_error_from_response(err.response) from None
    1042
BadRequestError: Unknown model: gpt-4o
During handling of the above exception, another exception occurred:
OpenAIError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
    1395 )
-> 1396 raise e
    1397
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
    1368 try:
-> 1369 response = openai_chat_completions.completion(
    1370 model=model,
/usr/local/lib/python3.10/dist-packages/litellm/llms/OpenAI/openai.py in completion(self, model_response, timeout, optional_params, logging_obj, model, messages, print_verbose, api_key, api_base, acompletion, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider, drop_params)
    913 error_text = getattr(e, "text", str(e))
--> 914 raise OpenAIError(
    915 status_code=status_code, message=error_text, headers=error_headers
OpenAIError: Unknown model: gpt-4o
During handling of the above exception, another exception occurred:
BadRequestError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
    979 # MODEL CALL
--> 980 result = original_function(*args, **kwargs)
    981 end_time = datetime.datetime.now()
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
    2823 ## Map to OpenAI Exception
-> 2824 raise exception_type(
    2825 model=model,
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
    8195 setattr(e, "litellm_response_headers", litellm_response_headers)
-> 8196 raise e
    8197 else:
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
    6521 exception_mapping_worked = True
-> 6522 raise BadRequestError(
    6523 message=f"{exception_provider} - {message}",
BadRequestError: litellm.BadRequestError: OpenAIException - Unknown model: gpt-4o
During handling of the above exception, another exception occurred:
BadRequestError Traceback (most recent call last)
<ipython-input-2-cebbbee08ec0> in <cell line: 76>()
    74
    75 # Get your crew to work!
---> 76 result = crew.kickoff()
    77
    78 print("######################")
/usr/local/lib/python3.10/dist-packages/crewai/crew.py in kickoff(self, inputs)
    464
    465 if self.process == Process.sequential:
--> 466 result = self._run_sequential_process()
    467 elif self.process == Process.hierarchical:
    468 result = self._run_hierarchical_process()
/usr/local/lib/python3.10/dist-packages/crewai/crew.py in _run_sequential_process(self)
    572 def _run_sequential_process(self) -> CrewOutput:
    573 """Executes tasks sequentially and returns the final output."""
--> 574 return self._execute_tasks(self.tasks)
    575
    576 def _run_hierarchical_process(self) -> CrewOutput:
/usr/local/lib/python3.10/dist-packages/crewai/crew.py in _execute_tasks(self, tasks, start_index, was_replayed)
    669
    670 context = self._get_context(task, task_outputs)
--> 671 task_output = task.execute_sync(
    672 agent=agent_to_use,
    673 context=context,
/usr/local/lib/python3.10/dist-packages/crewai/task.py in execute_sync(self, agent, context, tools)
    189 ) -> TaskOutput:
    190 """Execute the task synchronously."""
--> 191 return self._execute_core(agent, context, tools)
    192
    193 @property
/usr/local/lib/python3.10/dist-packages/crewai/task.py in _execute_core(self, agent, context, tools)
    245 self.processed_by_agents.add(agent.role)
    246
--> 247 result = agent.execute_task(
    248 task=self,
    249 context=context,
/usr/local/lib/python3.10/dist-packages/crewai/agent.py in execute_task(self, task, context, tools)
    192 if self._times_executed > self.max_retry_limit:
    193 raise e
--> 194 result = self.execute_task(task, context, tools)
    195
    196 if self.max_rpm and self._rpm_controller:
/usr/local/lib/python3.10/dist-packages/crewai/agent.py in execute_task(self, task, context, tools)
    192 if self._times_executed > self.max_retry_limit:
    193 raise e
--> 194 result = self.execute_task(task, context, tools)
    195
    196 if self.max_rpm and self._rpm_controller:
/usr/local/lib/python3.10/dist-packages/crewai/agent.py in execute_task(self, task, context, tools)
    191 self._times_executed += 1
    192 if self._times_executed > self.max_retry_limit:
--> 193 raise e
    194 result = self.execute_task(task, context, tools)
    195
/usr/local/lib/python3.10/dist-packages/crewai/agent.py in execute_task(self, task, context, tools)
    180
    181 try:
--> 182 result = self.agent_executor.invoke(
    183 {
    184 "input": task_prompt,
/usr/local/lib/python3.10/dist-packages/crewai/agents/crew_agent_executor.py in invoke(self, inputs)
    87
    88 self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
---> 89 formatted_answer = self._invoke_loop()
    90
    91 if self.ask_for_human_input:
/usr/local/lib/python3.10/dist-packages/crewai/agents/crew_agent_executor.py in _invoke_loop(self, formatted_answer)
    160 return self._invoke_loop(formatted_answer)
    161 else:
--> 162 raise e
    163
    164 self._show_logs(formatted_answer)
/usr/local/lib/python3.10/dist-packages/crewai/agents/crew_agent_executor.py in _invoke_loop(self, formatted_answer)
    109 stop=self.stop if self.use_stop_words else None,
    110 callbacks=self.callbacks,
--> 111 ).call(self.messages)
    112
    113 if not self.use_stop_words:
/usr/local/lib/python3.10/dist-packages/crewai/llm.py in call(self, messages)
    11
    12 def call(self, messages: List[Dict[str, str]]) -> Dict[str, Any]:
---> 13 response = completion(
    14 stop=self.stop, model=self.model, messages=messages, num_retries=5
    15 )
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
    1065 ):
    1066 kwargs["num_retries"] = num_retries
-> 1067 return litellm.completion_with_retries(*args, **kwargs)
    1068 elif (
    1069 isinstance(e, litellm.exceptions.ContextWindowExceededError)
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion_with_retries(*args, **kwargs)
    2855 reraise=True,
    2856 )
-> 2857 return retryer(original_function, *args, **kwargs)
    2858
    2859
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
    473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
    474 while True:
--> 475 do = self.iter(retry_state=retry_state)
    476 if isinstance(do, DoAttempt):
    477 try:
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in iter(self, retry_state)
    374 result = None
    375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
    377 return result
    378
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in exc_check(rs)
    416 retry_exc = self.retry_error_cls(fut)
    417 if self.reraise:
--> 418 raise retry_exc.reraise()
    419 raise retry_exc from fut.exception()
    420
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in reraise(self)
    183 def reraise(self) -> t.NoReturn:
    184 if self.last_attempt.failed:
--> 185 raise self.last_attempt.result()
    186 raise self
    187
/usr/lib/python3.10/concurrent/futures/_base.py in result(self, timeout)
    449 raise CancelledError()
    450 elif self._state == FINISHED:
--> 451 return self.__get_result()
    452
    453 self._condition.wait(timeout)
/usr/lib/python3.10/concurrent/futures/_base.py in __get_result(self)
    401 if self._exception:
    402 try:
--> 403 raise self._exception
    404 finally:
    405 # Break a reference cycle with the exception in self._exception
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
    476 if isinstance(do, DoAttempt):
    477 try:
--> 478 result = fn(*args, **kwargs)
    479 except BaseException: # noqa: B902
    480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
    1090 ): # make it easy to get to the debugger logs if you've initialized it
    1091 e.message += f"\n Check the log in your dashboard - {liteDebuggerClient.dashboard_url}"
-> 1092 raise e
    1093
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
    978 print_verbose(f"Error while checking max token limit: {str(e)}")
    979 # MODEL CALL
--> 980 result = original_function(*args, **kwargs)
    981 end_time = datetime.datetime.now()
    982 if "stream" in kwargs and kwargs["stream"] is True:
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
    2822 except Exception as e:
    2823 ## Map to OpenAI Exception
-> 2824 raise exception_type(
    2825 model=model,
    2826 custom_llm_provider=custom_llm_provider,
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
    8194 if exception_mapping_worked:
    8195 setattr(e, "litellm_response_headers", litellm_response_headers)
-> 8196 raise e
    8197 else:
    8198 for error_type in litellm.LITELLM_EXCEPTION_TYPES:
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
    6520 if original_exception.status_code == 400:
    6521 exception_mapping_worked = True
-> 6522 raise BadRequestError(
    6523 message=f"{exception_provider} - {message}",
    6524 llm_provider=custom_llm_provider,
BadRequestError: litellm.BadRequestError: OpenAIException - Unknown model: gpt-4o
```
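To confirm which model LiteLLM is actually sending, the debug flag suggested in the log output itself can be enabled before `crew.kickoff()` (a minimal sketch; `litellm.set_verbose` is the flag named in the log):
```
import litellm

# Print every outgoing request, including the resolved model name, to verify
# that "gpt-4o" is being sent despite OPENAI_MODEL_NAME being set.
litellm.set_verbose = True
```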
### Possible Solution
Downgrade to the previous version; that restores the old behavior. Alternatively, pass the model to each agent explicitly, as sketched below.
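The downgrade can be pinned the same way as the upgrade; the version below is only an illustration, so use whichever pre-0.60.0 release worked for you:
```
!pip install 'crewai[tools]'==0.55.2
```
Since 0.60.0 routes completions through LiteLLM, passing each agent an explicit model string with LiteLLM's `openai/` provider prefix might also avoid the `gpt-4o` fallback. This is only a sketch: it assumes `Agent` accepts an `llm` argument in this form and that `crewai.llm.LLM` (the module visible in the traceback) takes a `model` parameter; neither is verified on 0.60.0.
```
import os
from crewai import Agent
from crewai.llm import LLM  # import path inferred from the traceback; an assumption, not verified

os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxx"
os.environ["OPENAI_API_BASE"] = 'https://api.sambanova.ai/v1/'

# The "openai/" prefix tells LiteLLM to speak the OpenAI-compatible protocol
# to the custom base URL, using the SambaNova model instead of the gpt-4o default.
sambanova_llm = LLM(model="openai/Meta-Llama-3.1-8B-Instruct")

researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="You work at a leading tech think tank.",
    llm=sambanova_llm,  # explicit model, bypassing the OPENAI_MODEL_NAME lookup
)
```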
### Additional context
Running on Google Colab (hence "Other" for the operating system above).