Connecting Ollama with CrewAI

Hi,

Here is how I connect my local llama3 model to CrewAI.

I would check the following:

  1. Is the URL http://localhost:11434 reachable? Opening it in a browser should show a blank page saying "Ollama is running".
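
     A quick way to check this from Python (a minimal sketch using only the standard library, assuming the default port):

      import urllib.request

      # Prints "Ollama is running" if the server is up
      with urllib.request.urlopen("http://localhost:11434") as resp:
          print(resp.read().decode())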

  2. Do you see the Ollama icon next to the notification center in the top right? That means Ollama is running (if you are using the desktop app).

  3. I would check the locally installed models with `ollama list`, make sure the model is actually available by running `ollama run llama3` (or `ollama run llama3.3`), and then make sure you are referencing that exact model name in your CrewAI script, as in the snippet below.
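
     To double-check programmatically which models are installed, you can query Ollama's /api/tags endpoint (a small sketch, standard library only; it returns the same list that `ollama list` prints):

      import json
      import urllib.request

      # /api/tags lists the locally installed models
      with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
          data = json.load(resp)
      print([m["name"] for m in data["models"]])

     Once the model shows up in that list, reference that exact name in your CrewAI script: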

     from crewai import Agent, Task, Process, Crew, LLM

     # LLM object from the crewai package, pointed at the local Ollama server
     llm = LLM(model="ollama/llama3", base_url="http://localhost:11434")

     agent1 = Agent(
         role="role",
         goal="goal",
         backstory="backstory",
         verbose=True,
         allow_delegation=True,
         llm=llm,  # <<< attach the LLM to the agent to ensure it uses it
     )

     task1 = Task(
         description="description",
         expected_output="output",
         agent=agent1,
     )

     crew = Crew(
         agents=[agent1],
         tasks=[task1],
         # the agents' llm (and planning_llm below) carry the Ollama model
         cache=True,
         verbose=True,
         process=Process.sequential,
         planning=True,  # I see better results with this
         planning_llm=llm,  # <<< give the planner the same local LLM
     )

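With the crew defined, you can run it; a minimal usage sketch (`kickoff()` is CrewAI's standard entry point):

     result = crew.kickoff()
     print(result)
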
I hope this helps.