A question regarding using open source LLMs through Ollama with CrewAI

Hi all,

I was wondering if anyone has used CrewAI with one of the open source LLMs served by Ollama. Over the weekend I worked on a small project using CrewAI with Ollama. When I run it, it does not give any error, but at the same time it does not produce any results. I can share the code if anyone wants it. Thanks in advance for any hints!

Best regards,

Aslan

Hi @aslansd,
First question: what operating system are you running: Linux, Windows, or Mac?

I tend to be online most days, so please feel free to PM me.

I am relatively new to CrewAI, but may be able to help you with this one.

Hi @Dabnis, thanks for your response. I am using Mac!

Hi @aslansd,

I use local models often through Ollama. Here’s a simple CrewAI repo that uses several local LLMs. Feel free to switch to any models you’ve downloaded with Ollama.
Also, you can comment out gemini-1.5 as the manager LLM and change the process back from hierarchical to sequential.
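For reference, toggling between the two setups looks roughly like this (a minimal sketch; the agent, task, and model names below are placeholders, not the actual repo code):

from crewai import Agent, Crew, Process, Task
from langchain_community.chat_models import ChatOllama

# Any model you have pulled with Ollama works here.
local_llm = ChatOllama(model="llama3:latest")

# Hypothetical agent and task, for illustration only.
writer = Agent(
    role="Writer",
    goal="Write a short piece about {topic}",
    backstory="An experienced technical writer.",
    llm=local_llm,
)
write = Task(
    description="Write a short summary of {topic}",
    expected_output="A one-paragraph summary",
    agent=writer,
)

crew = Crew(
    agents=[writer],
    tasks=[write],
    # Hierarchical mode needs a manager LLM (the repo uses gemini-1.5):
    # manager_llm=..., process=Process.hierarchical,
    # ...or comment those out and fall back to sequential:
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "local LLMs"})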

The crew will ask you for a topic at start-up, which I find more flexible.
Check out the code if it helps.

Cheers,
Alex

I primarily use open source LLMs with both Ollama and LM Studio.

You basically install Ollama, then do an ollama pull (model name) for each model you want. Then, in Python:

# ChatOllama wraps a locally served Ollama model as a LangChain chat model.
from langchain_community.chat_models import ChatOllama

llm_ollama = ChatOllama(model="llama3-gradient:latest")
llm_ollama2 = ChatOllama(model="mistral:latest")
llm_ollama3 = ChatOllama(model="phi3:latest")
llm_ollama4 = ChatOllama(model="dolphin-llama3:8b-256k")
llm_ollama5 = ChatOllama(model="gemma2:latest")

etc. Then set your LLM on each agent in your crew: llm=llm_ollama, etc. You may need to run the ollama serve command first, but I don’t need to.

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True,
            allow_delegation=True,
            llm=llm_ollama,
            max_iter=800,
            memory=True,
            max_rpm=15
        )
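Then you wire the agents and tasks into a crew and kick it off; a minimal sketch (the module and class names here are placeholders, not from an actual project):

# Sketch: kicking off a CrewBase-style crew (names are illustrative).
from my_project.crew import MyProjectCrew  # hypothetical module/class

result = MyProjectCrew().crew().kickoff(inputs={"topic": "local LLMs"})
print(result)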

Thanks for your comments @alexcovo @jklre!


You’re welcome. Just FYI, I now have an issue with langchain-core 0.3.0 after updating crewai to the latest version today, which I still have to fix. Just a heads up!

Works with crewai 0.51.1

Thanks for info @alexcovo!

If you’re going to use the repo, use it with crewai 0.51.1.

I’m having issues locally with the 0.6 update, langchain-core, and dependencies.

LangChain is no longer a dependency of crewAI.

What issue are you having?
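For context, the practical difference: crewAI now routes model calls through LiteLLM, so an agent takes a provider-prefixed model string instead of a LangChain object. A rough before/after sketch (the agent fields are illustrative):

# Old style (LangChain object):
# from langchain_community.chat_models import ChatOllama
# llm = ChatOllama(model="llama3:latest")

# New style (LiteLLM provider-prefixed model string):
from crewai import Agent

agent = Agent(
    role="Researcher",                # illustrative fields
    goal="Research the given topic",
    backstory="A thorough researcher.",
    llm="ollama/llama3",              # "ollama/<model name>"
)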

@aslansd Here is the updated code for 0.60.0. Unfortunately, what I posted previously doesn’t work anymore. I’ve updated the GitHub repository as well.

Here’s the updated crew.py code:

import os
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from dotenv import load_dotenv
import litellm  # Added for using the Google API (Gemini models)
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

load_dotenv()

# Set the Google API key for LiteLLM to use Gemini LLM models
litellm.api_key = os.getenv('GOOGLE_API_KEY')

# Uncomment the following line to use an example of a custom tool
# from conundrum_crew.tools.custom_tool import MyCustomTool

@CrewBase
class ConundrumCrew():
    """Conundrum crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            # tools=[MyCustomTool()], # Example of a custom tool, loaded at the beginning of the file
            verbose=True,
            llm='ollama/hermes3:latest',
            max_iter=5,  # Agent's iteration cap (crewai uses max_iter / max_execution_time)
            max_execution_time=120,
            tools=[search_tool]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True,
            llm='ollama/gemma2:27b',
            max_iter=10,
            max_execution_time=120,
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
            output_file='research.md'
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Conundrum crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            manager_llm='ollama/gemma2:27b',  # You can also use Gemini, for example: 'gemini/gemini-1.5-flash-exp-0827'
            process=Process.hierarchical,  # In case you want to use that instead: https://docs.crewai.com/how-to/Hierarchical/
            # process=Process.sequential,
            verbose=True,
        )
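To run it, the generated main.py kicks the crew off with an inputs dict, roughly like this (a sketch of the standard CrewBase scaffold; the topic prompt mirrors what was described earlier in the thread):

# main.py (sketch): prompt for a topic, then kick off the crew.
from conundrum_crew.crew import ConundrumCrew

def run():
    topic = input("What topic should the crew work on? ")
    result = ConundrumCrew().crew().kickoff(inputs={'topic': topic})
    print(result)

if __name__ == '__main__':
    run()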

I had issues with my conda environment but sorted them out. I still wasn’t sure how to implement Gemini, but for now I’ve solved it with the above. If there’s an easier way, let me know!
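A possibly simpler route (untested here, and an assumption about LiteLLM’s key convention rather than something from the repo): export the key LiteLLM looks for and pass the gemini/ model string directly, skipping the litellm.api_key line:

import os

# Assumption: LiteLLM reads GEMINI_API_KEY for 'gemini/...' model strings,
# so exporting it may replace the litellm.api_key line above.
os.environ['GEMINI_API_KEY'] = os.getenv('GOOGLE_API_KEY', '')

# Then in crew(), the manager LLM is just the model string:
# manager_llm='gemini/gemini-1.5-flash',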

Thanks @alexcovo for your comments!
