Hi there!
I have the Llama-3.2-3B-Instruct model deployed on internal infrastructure and I want to use it as the LLM for one of my agents. This is what I prepared:
  from crewai import Agent, LLM

  llm = LLM(
      model="alpindale/Llama-3.2-3B-Instruct",
      base_url="http://server_name:8001/v1",
      api_key="NA",  # dummy key for the internal endpoint
  )

  agent = Agent(
      config=self.agents_config["selector"],  # type: ignore
      verbose=True,
      allow_delegation=False,
      llm=llm,
  )
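For context, the endpoint is meant to be OpenAI-compatible (hence the /v1 in base_url), so a direct call against it looks like the sketch below — this is just to show the shape of the deployment, assuming the standard chat completions route:

  # Direct call to the endpoint, bypassing CrewAI/litellm entirely.
  # Sketch only: assumes the server implements the standard OpenAI
  # chat completions API at the /v1 route shown above.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://server_name:8001/v1",
      api_key="NA",  # dummy key; the internal server doesn't require one
  )

  resp = client.chat.completions.create(
      model="alpindale/Llama-3.2-3B-Instruct",
      messages=[{"role": "user", "content": "Say hello."}],
  )
  print(resp.choices[0].message.content)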
When the agent actually runs through CrewAI, I'm getting this error:
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=alpindale/Llama-3.2-3B-Instruct
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Is it possible to use custom models at all?
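From the docs link in the error it looks like litellm wants an explicit provider prefix on the model string. Would something along these lines be the right shape for a self-hosted OpenAI-compatible endpoint? (The openai/ prefix is just my guess from the providers page; I haven't confirmed it works here.)

  # Guess at the fix: prefix the model string with a provider so litellm
  # knows how to route the request. "openai/" is an assumption on my part
  # for an OpenAI-compatible server — not verified.
  llm = LLM(
      model="openai/alpindale/Llama-3.2-3B-Instruct",
      base_url="http://server_name:8001/v1",
      api_key="NA",
  )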