File "/home/jonathan/PycharmProjects/Development/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 479, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3-8b-8192
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
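
Not an official fix, just a sketch of what the error message seems to be asking for: litellm can't infer a provider from the bare model name, so prefixing it with the provider should help. I'm assuming here that `llama3-8b-8192` is the Groq-hosted model of that name; adjust the prefix (and API key) for whichever provider you actually use.

```python
import litellm

# Assumption: llama3-8b-8192 is being served via Groq, so the prefix is "groq/".
# Requires GROQ_API_KEY to be set in the environment.
response = litellm.completion(
    model="groq/llama3-8b-8192",  # provider/model format, as the error suggests
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```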
I have a similar issue: we have Ollama running in a local Docker container and our Python container cannot reach it. From various testing it looks like `base_url` is simply ignored. I put in random values and it would still try to hit localhost:11434. Is this a known issue? I've tried specifying the URL by overriding the env variable, to no avail!
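
For what it's worth, here is a sketch of what I'd expect to work based on the litellm docs: the per-call override parameter appears to be `api_base` rather than `base_url`, which might be why your value looks ignored. The hostname `ollama` below is just a placeholder for whatever your Docker network calls the Ollama container.

```python
import litellm

# Assumption: the Ollama container is reachable from the Python container
# at http://ollama:11434 (replace with your actual service name / host).
response = litellm.completion(
    model="ollama/llama3",            # "ollama/" prefix selects the Ollama provider
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="http://ollama:11434",   # per-call override of the default localhost:11434
)
print(response.choices[0].message.content)
```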