Ollama stopped working since update to CrewAI 0.60.0

I updated to CrewAI 0.60.0 after watching the live stream last night. I now find that my local Ollama has stopped working with CrewAI.

  File "/home/jonathan/PycharmProjects/Development/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 507, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: GetLLMProvider Exception - 'Ollama' object has no attribute 'split'

original model: Ollama
Params: {'model': 'llama3.1', 'format': None, 'options': {'mirostat': None, 'mirostat_eta': None, 'mirostat_tau': None, 'num_ctx': None, 'num_gpu': None, 'num_thread': None, 'num_predict': None, 'repeat_last_n': None, 'repeat_penalty': None, 'temperature': None, 'stop': None, 'tfs_z': None, 'top_k': None, 'top_p': None}, 'system': None, 'template': None, 'keep_alive': None, 'raw': None}

Any suggestions/help appreciated.

How is the LLM configured?

import os
from langchain_community.llms import OpenAI, Ollama
from langchain_openai import ChatOpenAI
from langchain_groq import ChatGroq

class LLMS:
    def __init__(self):
        # OpenAI chat models
        self.OpenAIGPT35 = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
        self.OpenAIGPT4oMini = ChatOpenAI(model_name="gpt-4o-mini", temperature=0.8)
        self.OpenAIGPT4o = ChatOpenAI(model_name="gpt-4o", temperature=0.8)
        self.OpenAIGPT4 = ChatOpenAI(model_name="gpt-4", temperature=0.8)
        # Local Ollama models
        # self.Phi3 = Ollama(model="phi3:mini")
        self.Llama3_1 = Ollama(model="llama3.1")
        self.Phi3 = Ollama(model="phi3:medium-128k")
        # self.Phi3 = ChatOpenAI(model_name="phi3:medium-128k", temperature=0, api_key="ollama", base_url="http://localhost:11434")
        # Groq-hosted model (reads GROQ_API_KEY from the environment)
        self.groqLama3_8B_3192 = ChatGroq(temperature=0.5, groq_api_key=os.environ.get("GROQ_API_KEY"),
                                          model_name="llama3-8b-8192")

@matt
FYI: Groq has also stopped working:

  File "/home/jonathan/PycharmProjects/Development/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 479, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3-8b-8192
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Hi @matt,
FYI, llm='groq/llama3-8b-8192' via LiteLLM works OK.
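For reference, a minimal sketch of how that looks in agent code, assuming that since the LiteLLM migration an agent's llm parameter accepts a provider-prefixed model string (the role/goal/backstory values below are just placeholders):

import os
from crewai import Agent

# GROQ_API_KEY must already be set in the environment for LiteLLM to pick it up.
researcher = Agent(
    role="Researcher",
    goal="Verify that the Groq-backed model is reachable",
    backstory="Throwaway agent used only to test the LLM configuration.",
    llm="groq/llama3-8b-8192",  # provider/model string routed through LiteLLM
)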

Today I learned about how CrewAI integrates LiteLLM :grin:

I still need to be able to run local Ollama-based models!

With Ollama you can still use LiteLLM. You just need to change the base URL via the env variable, for example:

api_base="http://localhost:11434"
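A minimal sketch of that idea for a local model, assuming the crewai.LLM wrapper available in recent releases (which, as I understand it, accepts a base_url that is forwarded to LiteLLM; on 0.60.0 itself you may instead need to pass the provider-prefixed string straight to the agent):

import os
from crewai import LLM

# Env variable route: point LiteLLM's Ollama provider at the local server (default port shown).
os.environ["OLLAMA_API_BASE"] = "http://localhost:11434"

# ollama/<model> selects LiteLLM's Ollama route; base_url is passed explicitly as well
# (assumption: crewai.LLM forwards it to LiteLLM as the API base).
local_llm = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434",
)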


I have a similar issue: we have Ollama running in a local Docker container and our Python container cannot reach it. From various testing it looks like base_url is simply ignored; I put in random values and it would still try to hit localhost:11434. Is this a known issue? I've tried specifying the URL by overriding the env variable, to no avail!

Try using the env variable OLLAMA_API_BASE. I’m running litellm==1.53.4, crewai==0.83.0, and crewai-tools==0.14.0, and that works for me.
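One note for the Docker case: localhost inside the Python container points at that container, not at Ollama, so the variable has to use the Ollama container's hostname on the shared Docker network. A minimal sketch, where the service name "ollama" is an assumption:

import os
from crewai import LLM

# "ollama" here is a hypothetical Docker Compose service name; use whatever your compose file defines.
os.environ["OLLAMA_API_BASE"] = "http://ollama:11434"

# Belt and braces: pass the same URL explicitly, in case the env variable alone is ignored.
llm = LLM(model="ollama/llama3.1", base_url=os.environ["OLLAMA_API_BASE"])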