A question regarding using open source LLMs through Ollama with CrewAI

I primarily use open source LLMs with both Ollama and LM Studio.

You basically install Ollama, then pull each model you want with ollama pull <model name> (e.g. ollama pull mistral:latest). Then, in Python:

    from langchain_community.chat_models import ChatOllama

    # Each of these must already be pulled locally (ollama pull <model name>)
    llm_ollama = ChatOllama(model="llama3-gradient:latest")
    llm_ollama2 = ChatOllama(model="mistral:latest")
    llm_ollama3 = ChatOllama(model="phi3:latest")
    llm_ollama4 = ChatOllama(model="dolphin-llama3:8b-256k")
    llm_ollama5 = ChatOllama(model="gemma2:latest")
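
You can define as many of these as you like. To sanity-check that the Ollama server is reachable and a model actually responds before wiring anything into CrewAI, a one-off call works. A minimal sketch, assuming the model above is already pulled:

    reply = llm_ollama.invoke("Say hello in one sentence.")
    print(reply.content)  # chat models return a message; the text is in .content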

Then set the LLM on each agent in your crew (llm=llm_ollama, and so on). You may need to run the ollama serve command first, but I don't need to. For example:

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True,
            allow_delegation=True,
            llm=llm_ollama,  # the local Ollama model defined above
            max_iter=800,
            memory=True,
            max_rpm=15
        )
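
For context, here is roughly how that agent slots into a complete crew. This is a sketch of the standard CrewBase scaffolding rather than anything Ollama-specific; the ReportCrew name, the reporting_task entry, and the YAML paths are placeholders for whatever your own project defines:

    from crewai import Agent, Crew, Process, Task
    from crewai.project import CrewBase, agent, crew, task
    from langchain_community.chat_models import ChatOllama

    llm_ollama = ChatOllama(model="llama3-gradient:latest")

    @CrewBase
    class ReportCrew:
        # Standard scaffold paths; adjust to your project layout
        agents_config = 'config/agents.yaml'
        tasks_config = 'config/tasks.yaml'

        @agent
        def reporting_analyst(self) -> Agent:
            # Trimmed-down version of the agent above
            return Agent(
                config=self.agents_config['reporting_analyst'],
                llm=llm_ollama,
                verbose=True,
            )

        @task
        def reporting_task(self) -> Task:
            return Task(config=self.tasks_config['reporting_task'])

        @crew
        def crew(self) -> Crew:
            return Crew(
                agents=self.agents,  # gathered from the @agent methods
                tasks=self.tasks,    # gathered from the @task methods
                process=Process.sequential,
            )

    result = ReportCrew().crew().kickoff()
    print(result)

Nothing else changes versus an OpenAI-backed crew; the llm= argument on each Agent is the only Ollama-specific piece.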