Problem with using locally deployed custom llm

Hi there!
I have the Llama-3.2-3B-Instruct model deployed on internal infrastructure and I want to use it as the LLM for one of my agents. This is what I prepared:

  llm = LLM(
      model="alpindale/Llama-3.2-3B-Instruct",
      base_url="http://server_name:8001/v1",
      api_key="NA"
  )

  agent = Agent(
      config=self.agents_config["selector"],  # type: ignore
      verbose=True,
      allow_delegation=False,
      llm=self.llm
  )

I'm getting this error:

litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=alpindale/Llama-3.2-3B-Instruct
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Is it possible to use custom models at all?

Hey, hope you are doing well!

I'm using a local model just like you, but mine is DeepSeek.

I'd advise you to try using a different model name, but one that uses the same endpoints and call structure.

For example, I'm using DeepSeek (you can check in the documentation, they don't have a specific provider for DeepSeek or anything like that), which uses the same endpoints and request structure as OpenAI:

llm = LLM(
    model='gpt-4',  # but in reality it's DeepSeek
    temperature=0.7,
    timeout=120,
    max_tokens=10000,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    seed=42,
    base_url="https://server_name/v1",
    api_key="dummy"
)

So when I instantiate my LLM, I can pass the model name as "gpt-4" and use a dummy API key.

Try passing a different model name that LiteLLM can recognize, like "ollama/llama3:70b".
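
For example, here is a rough sketch of your original setup with an "openai/" prefix added. This is just an assumption on my side that your server exposes an OpenAI-compatible /v1 endpoint (like vLLM does); I haven't tested it against your deployment:

from crewai import LLM

llm = LLM(
    model="openai/alpindale/Llama-3.2-3B-Instruct",  # "openai/" tells LiteLLM to use the OpenAI-compatible call format
    base_url="http://server_name:8001/v1",           # your internal endpoint
    api_key="NA"                                     # dummy value, if your server doesn't check keys
)

That way you don't have to pretend it's "gpt-4"; the prefix only tells LiteLLM which provider format to use for the request.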

Hope this helps you in some way!