Connecting Ollama with crewai

I am having issues using LLMs other than the default OpenAI models with my crews. I am new to all this and have been following the instructions from the official documentation (Quickstart - CrewAI) and also the LiteLLM docs to see what parameters I need to pass to the LLM instance.

My issue is that when I run the command "python src/crewai/main.py" it doesn't do anything; not even an error gets thrown. I have run it with verbose enabled as well, and there are no error messages in the logs.

I have switched to Ollama and I can interact with it manually, but when I run it as part of my crew it behaves the same way the Cohere LLM did.

Not sure if it's my machine; I'm using an old MacBook Air. I am on crewai version 0.86.0 and Python version 3.11.9.
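
In case it matters, this is the quick check I use to confirm which Python and crewai versions the environment actually resolves to (just a standard-library sketch):

    import sys
    from importlib.metadata import PackageNotFoundError, version

    # Print the interpreter version and the installed crewai version for the active environment
    print("Python:", sys.version.split()[0])
    try:
        print("crewai:", version("crewai"))
    except PackageNotFoundError:
        print("crewai is not installed in this environment")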

I create my crew project by running crewai create crew crewai_firstcrew

In the resulting files, I update the crew.py file as follows (my additions are marked with comments):

    from crewai import Agent, Crew, Process, Task, LLM
    from crewai.project import CrewBase, agent, crew, task
    from dotenv import load_dotenv

    load_dotenv()

    @CrewBase
    class CrewaiFirstcrew():
        """CrewaiFirstcrew crew"""

        agents_config = 'config/agents.yaml'
        tasks_config = 'config/tasks.yaml'

        # My addition: a local Ollama LLM instance
        ollama_llm = LLM(
            model="ollama/llama3.2:3b",
            api_base="http://localhost:11434"
        )

        @agent
        def researcher(self) -> Agent:
            return Agent(
                config=self.agents_config['researcher'],
                verbose=True,
                llm=self.ollama_llm  # my addition
            )

Hoping someone can give me some direction on how to get this working, or point out any misconfiguration I might have.

Hi,

Here is how I connect my local llama3:7b to a crew.

I would check the following:

  1. Is the http://localhost:11434 URL open, and do you see a blank page saying "Ollama is running"?

  2. Do you see the Ollama icon next to the notification center in the top right? That means Ollama is running (if you are using the Ollama app).

  3. I would check the locally installed Llama models with ollama list, make sure you are running that model with ollama run llama3 (or ollama run llama3.3), and then make sure you are referencing that exact model name in your CrewAI script. (There is also a quick check script at the end of this post.)

    from crewai import Agent, Task, Process, Crew, LLM

    # LLM object from the crewai package
    llm = LLM(model="ollama/llama3", base_url="http://localhost:11434")

    agent1 = Agent(
        role="role",
        goal="goal",
        backstory="backstory",
        verbose=True,
        allow_delegation=True,
        llm=llm  # <<<<<<< add the model to the agent to ensure it uses it
    )

    task1 = Task(
        description="description",
        expected_output="output",
        agent=agent1
    )

    crew = Crew(
        agents=[agent1],
        model="ollama/llama3",  # <<<<< add the model to the crew to ensure it uses it
        tasks=[task1],
        cache=True,
        verbose=True,
        process=Process.sequential,
        planning=True,  # I see better results with this
        planning_llm=llm
    )

I hope this helps.
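
One more thing: for points 1 and 3 above you can also sanity-check Ollama from Python before involving CrewAI. This is just a rough sketch against Ollama's REST API (the root URL returns "Ollama is running" and /api/tags lists the locally pulled models); it assumes the requests package is installed, and the model name should be whatever ollama list shows for you:

    import requests  # assumption: requests is installed in your environment

    OLLAMA_URL = "http://localhost:11434"
    MODEL = "llama3"  # use the exact name reported by `ollama list`

    # Point 1: is the Ollama server reachable? The root URL returns "Ollama is running".
    print(requests.get(OLLAMA_URL, timeout=5).text)

    # Point 3: is the model actually pulled? /api/tags lists the local models.
    models = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json().get("models", [])
    names = [m["name"] for m in models]
    print("Local models:", names)
    if not any(name.startswith(MODEL) for name in names):
        print(f"'{MODEL}' not found locally - run: ollama pull {MODEL}")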

Thanks @Salman - I have done everything you suggested and it's still the same issue. I'll just put my whole crew.py file here in case you can spot any errors.

  • Checking that llama can be accessed and used locally (see attached images)

[Screenshot 2024-12-24 at 11.52.02]

  • crew.py file
    from crewai import Agent, Crew, Process, Task, LLM
    from crewai.project import CrewBase, agent, crew, task
    from dotenv import load_dotenv

    load_dotenv()

    @CrewBase
    class CrewaiFirstcrew():
        """CrewaiFirstcrew crew"""

        agents_config = 'config/agents.yaml'
        tasks_config = 'config/tasks.yaml'

        ollama_llm = LLM(
            model="ollama/llama3.2",
            api_base="http://localhost:11434"
        )

        @agent
        def researcher(self) -> Agent:
            return Agent(
                config=self.agents_config['researcher'],
                verbose=True,
                llm=self.ollama_llm
            )

        @agent
        def reporting_analyst(self) -> Agent:
            return Agent(
                config=self.agents_config['reporting_analyst'],
                verbose=True,
                llm=self.ollama_llm
            )

        @task
        def research_task(self) -> Task:
            return Task(
                config=self.tasks_config['research_task'],
            )

        @task
        def reporting_task(self) -> Task:
            return Task(
                config=self.tasks_config['reporting_task'],
                output_file='report.md'
            )

        @crew
        def crew(self) -> Crew:
            """Creates the CrewaiFirstcrew crew"""
            return Crew(
                agents=self.agents,  # Automatically created by the @agent decorator
                tasks=self.tasks,  # Automatically created by the @task decorator
                process=Process.sequential,
                verbose=True,
                # process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
                model="ollama/llama3.2"
            )

Hopefully you can spot a mistake in here.

This is what happens when I run it btw:

[Screenshot 2024-12-24 at 15.14.39]

The lack of output in the terminal seems to be a red herring; the agents are indeed able to connect to the model (in my testing), but you need the correct command to run the crew.

As per the documentation, you need to run crewai install in the project directory (to install dependencies) and then use crewai run to run the crew.

Running main.py directly won't give you results, as the template needs its dependencies installed before it can execute crews.

I recommend taking the course (it's a bit old, but the fundamentals still apply).


Thanks Salman - managed to resolve it by updating the project toml file and also correcting an issue in the main.py file.