Hi,
Here is how I connect my local llama3:7b to CrewAI.
I would check the following:
- Is http://localhost:11434 reachable, i.e. can you open it in a browser and see a blank page saying "Ollama is running"?
- Do you see the Ollama icon next to the notification center in the top right? That means Ollama is running (if you are using the Ollama desktop app).
- I would check my locally installed llama models with `ollama list`, make sure you are running the model with `ollama run llama3` or `ollama run llama3.3`, and then make sure you are referencing that exact model in your CrewAI script (see the sketch after this list for a programmatic version of these checks).
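
If you prefer to run the first and last checks programmatically instead of by hand, here is a minimal sketch. It assumes the `requests` package is installed; `BASE_URL` and `MODEL` are placeholder names, and `GET /` and `GET /api/tags` are Ollama's local REST endpoints for the health message and the list of locally pulled models.

```python
# Quick sanity check that Ollama is up and the model you plan to
# reference from CrewAI is actually pulled locally.
# Assumes `requests` is installed (pip install requests).
import requests

BASE_URL = "http://localhost:11434"
MODEL = "llama3"  # the name you will pass to CrewAI as "ollama/llama3"

# 1. Is the server running? The root endpoint returns "Ollama is running".
resp = requests.get(BASE_URL, timeout=5)
print(resp.text)  # expect: Ollama is running

# 2. Is the model pulled? /api/tags lists locally installed models.
tags = requests.get(f"{BASE_URL}/api/tags", timeout=5).json()
local_models = [m["name"] for m in tags.get("models", [])]
print(local_models)  # e.g. ["llama3:latest", "llama3.3:latest"]

if not any(name.startswith(MODEL) for name in local_models):
    print(f"Model '{MODEL}' not found locally. Run: ollama pull {MODEL}")
```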

```python
from crewai import Agent, Task, Process, Crew, LLM  # LLM object from the crewai package

llm = LLM(model="ollama/llama3", base_url="http://localhost:11434")

agent1 = Agent(
    role="role",
    goal="goal",
    backstory="backstory",
    verbose=True,
    allow_delegation=True,
    llm=llm  # <<<<<<< add model to agent to ensure it uses it
)

task1 = Task(
    description="description",
    expected_output="output",
    agent=agent1
)

crew = Crew(
    agents=[agent1],
    model="ollama/llama3",  # <<<<< add model to crew to ensure it uses it
    tasks=[task1],
    cache=True,
    verbose=True,
    process=Process.sequential,
    planning=True,  # I see better results with this
    planning_llm=llm
)
```
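
Once the crew is wired up like this, `crew.kickoff()` is what actually runs it. If the Ollama URL or model name is wrong, this is usually where it fails, so it doubles as a smoke test:

```python
# Run the crew end-to-end; errors about the LLM/base_url surface here.
result = crew.kickoff()
print(result)
```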
I hope this helps.