litellm.BadRequestError: LLM Provider NOT provided

I’m trying to get my crew to run, but LiteLLM fails because it cannot identify the LLM provider, even though the LLM is passed as a parameter. I couldn’t find anyone else reporting the same problem, so my workaround was to roll back to CrewAI version 0.51, where the same code runs without errors.

The code I’m using as a reference for the test:

from crewai import Agent, Task, Crew
from langchain_openai import AzureChatOpenAI
from dotenv import load_dotenv  # provided by the python-dotenv package
import os

load_dotenv()

llm = AzureChatOpenAI(
    openai_api_version=os.getenv("OPENAI_API_VERSION"),
    model_name=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    temperature=0.3,
)

fine_tuning_agent = Agent(
    role="Your role is to create dialogues",
    goal="Ask the best possible questions and answers",
    backstory="""You're a professional interviewer""",
    verbose=False,
    allow_delegation=False,
    max_iter=2,
    llm=llm,
)

task1 = Task(
    description = """
            Based on a prompt received, create 3 different questions and answers:

            Prompt: {prompt}""",

    agent = fine_tuning_agent,
    expected_output = """
            Your 3 answers must be in English in Json format.

            Sample answer:

    {examples}"""
)

crew = Crew(
    agents = [fine_tuning_agent],
    tasks = [task1],
    verbose = True,
)

result = crew.kickoff(
    inputs={"prompt": "Hypothetical dialogues of a human and a dog talking",
            "examples": {"messages": '[{"role": "user", "content": "put your question here"}, {"role": "assistant", "content": "put the answer here"}]'}},
)

The error I’m getting is as follows:

2024-10-10 15:55:52,631 - 16308 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gpt-35-turbo
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gpt-35-turbo
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gpt-35-turbo
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\eduardo\crewAI\main.py", line 66, in <module>
    result = crew.kickoff(
             ^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\crew.py", line 490, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\crew.py", line 594, in _run_sequential_process
    return self._execute_tasks(self.tasks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\crew.py", line 692, in _execute_tasks
    task_output = task.execute_sync(
                  ^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\task.py", line 191, in execute_sync
    return self._execute_core(agent, context, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\task.py", line 247, in _execute_core
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agent.py", line 239, in execute_task
    result = self.execute_task(task, context, tools)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agent.py", line 239, in execute_task
    result = self.execute_task(task, context, tools)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agent.py", line 238, in execute_task
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\eduardo\crewAI\.venv\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gpt-35-turbo
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Can anyone tell me if this is a bug and/or if I’m missing a setting?

You do not need LangChain anymore — configure the model directly in CrewAI instead.
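For reference, a minimal sketch of what that looks like for Azure OpenAI using CrewAI's built-in LLM class and the same environment variables as above. The AZURE_OPENAI_ENDPOINT variable name is an assumption, and the exact LLM keyword arguments may vary slightly by CrewAI version:

# Minimal sketch: use CrewAI's own LLM class instead of langchain_openai.AzureChatOpenAI.
# The "azure/" prefix tells LiteLLM which provider to route the call to.
# AZURE_OPENAI_ENDPOINT is an assumed variable name; adjust to your setup.
import os
from crewai import Agent, LLM

azure_llm = LLM(
    model=f"azure/{os.getenv('AZURE_OPENAI_DEPLOYMENT')}",  # provider/deployment name
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_version=os.getenv("OPENAI_API_VERSION"),
    temperature=0.3,
)

fine_tuning_agent = Agent(
    role="Your role is to create dialogues",
    goal="Ask the best possible questions and answers",
    backstory="You're a professional interviewer",
    llm=azure_llm,
)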


That solved it, thank you!

Hi! I have the same error using Ollama. How did you solve it?

I am using the code: CrewAI/app.py at main · mvdiogo/CrewAI (github.com)

The error with crewai version '0.70.1':

BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=Ollama
Params: {'model': 'llama2:13b', 'format': None, 'options': {'mirostat': None, 'mirostat_eta': None, 'mirostat_tau': None, 'num_ctx': None, 'num_gpu': None, 'num_thread': None, 'num_predict': None, 'repeat_last_n': None, 'repeat_penalty': None, 'temperature': None, 'stop': None, 'tfs_z': None, 'top_k': None, 'top_p': None}, 'system': None, 'template': None, 'keep_alive': None, 'raw': None}
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Does the following code solve the issue?

# Import the LLM class
from crewai import Agent, LLM

# Use it with the agent
my_agent = Agent(
    ...,
    llm=LLM(
        base_url="http://localhost:11434/",
        model="ollama/llama2:13b",
    ),
)

Is this true? I’ve been having the most difficult time with LiteLLM → Gemini.

What error are you getting?

Hi Rob, I posted the errors in another thread. In the end I was able to get in touch with the GCP Gemini team, who suggested integrating through Gemini’s OpenAI API compatibility instead of the native Gemini SDKs, which resolved the errors!

FYI, there are two options:
a. Through Vertex AI: Call Vertex AI models by using the OpenAI library | Generative AI on Vertex AI | Google Cloud
b. Through Google AI Studio: OpenAI compatibility | Gemini API | Google AI for Developers
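For anyone else hitting this, here is a rough sketch of option (b), calling Gemini through its OpenAI-compatible endpoint with the plain OpenAI SDK. The base URL and model name are taken from Google's compatibility docs; double-check them against the pages linked above:

# Sketch of option (b): Gemini via its OpenAI-compatible endpoint.
# Verify the base_url and model name against Google's current docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GEMINI_API_KEY",  # key from Google AI Studio
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)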

You have to use the <provider_name>/<model_name> convention in the model attribute.
For example, if you are using Azure OpenAI, it should be:

model = "azure/<your_deployment_name>"
Please refer to the docs below
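To illustrate the convention, a sketch only — the deployment and model names below are placeholders, and which ones work depends on your setup:

# Provider-prefixed model strings as LiteLLM expects them (names are placeholders):
from crewai import LLM

azure_llm  = LLM(model="azure/my-gpt35-deployment")    # Azure OpenAI deployment name
ollama_llm = LLM(model="ollama/llama2:13b", base_url="http://localhost:11434")
gemini_llm = LLM(model="gemini/gemini-1.5-flash")      # Google AI Studio API key
openai_llm = LLM(model="gpt-4o-mini")                  # bare names default to OpenAI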