BadRequestError: litellm.BadRequestError: LLM Provider NOT provided

I am getting the error below when trying to use LiteLLM with CrewAI.

Code snippet:

from crewai import Agent, LLM

llm = LLM(
    model="my-model",
    openai_api_base="http://localhost:4000",
    api_key="api-key"
)

get_research = CustomResearcherTool()

researcher = Agent(
    role='{topic} Senior Data Researcher',
    goal='Uncover cutting-edge developments in {topic}',
    backstory=(
        """You're a seasoned researcher with a knack for uncovering the latest
        developments in {topic}. Known for your ability to find the most relevant
        information and present it in a clear and concise manner."""
    ),
    tools=[get_research],
    llm=llm
)

Error:

BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=my-model
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

@Yash The error occurs because of how you set up your LLM. The CrewAI LLM class uses LiteLLM under the hood, so you also need to specify the LLM provider as part of the model name.

For example, if you use Anthropic's Claude 3.5 Sonnet, prefix the model name with anthropic/:

import os

from crewai import Agent, LLM

my_llm = LLM(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    model="anthropic/claude-3-5-sonnet-20240620",
)

my_agent = Agent(
    ...,
    llm=my_llm,
)

@rokbenko So if I am using SageMaker, I guess I should pass the SageMaker endpoint as my model.

When I do that, I am getting:

AttributeError: 'Response' object has no attribute 'get'

@Yash Take a look at the official LiteLLM docs on LLM providers. The part you're interested in is the AWS SageMaker section.
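
As a minimal sketch (assuming your AWS credentials are already available as environment variables, and where my-endpoint is a placeholder for your SageMaker endpoint name), the sagemaker/ prefix tells LiteLLM which provider to route the call to:

from crewai import LLM

# Assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION_NAME are
# already set in the environment; LiteLLM picks them up automatically.
# "my-endpoint" is a placeholder for your SageMaker endpoint name.
sagemaker_llm = LLM(
    model="sagemaker/my-endpoint",
)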

@rokbenko Just need to ask: can't we use the LiteLLM proxy server with CrewAI?
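
For reference, a sketch of pointing the CrewAI LLM class at a LiteLLM proxy, assuming the proxy runs on http://localhost:4000 and serves a model named my-model (both taken from the original snippet): since the proxy exposes an OpenAI-compatible API, route the call through the openai/ provider and set base_url to the proxy address.

from crewai import LLM

# The LiteLLM proxy exposes an OpenAI-compatible endpoint, so use the
# openai/ provider prefix and point base_url at the proxy.
# "my-model" and "api-key" are placeholders from the original snippet.
proxy_llm = LLM(
    model="openai/my-model",
    base_url="http://localhost:4000",
    api_key="api-key",
)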
