LiteLLM issue when used with Ollama

This crew is a simple chat-and-response crew. When I use Ollama to run a small local LLM, I keep getting output from LiteLLM that tells me nothing. main.py is just a loop: take user input and print the response.
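For context, the loop in main.py looks roughly like this (a minimal sketch: the import path and the user_message input key are assumptions based on the default CrewAI scaffold and whatever placeholder the task config uses):

from try1.crew import Try1  # assumption: default CrewAI project layout

def run():
    # Simple chat loop: read a message, run the crew, print the reply
    while True:
        user_message = input("You: ")
        if user_message.strip().lower() in ("exit", "quit"):
            break
        # 'user_message' must match the placeholder used in tasks.yaml (assumption)
        result = Try1().crew().kickoff(inputs={"user_message": user_message})
        print("Bot:", result)

if __name__ == "__main__":
    run()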
The screenshot and my crew.py code are given below:

from crewai import Agent, Crew, Process, Task, LLM
from crewai.project import CrewBase, agent, crew, task

from dotenv import load_dotenv

load_dotenv()

@CrewBase
class Try1():
	"""Try1 crew"""

	# Local Ollama model served at Ollama's default port
	llm = LLM(
		model='ollama/dolphin-phi',
		base_url='http://localhost:11434',
	)

	agents_config = 'config/agents.yaml'
	tasks_config = 'config/tasks.yaml'

	@agent
	def chat_agent(self) -> Agent:
		return Agent(
			config=self.agents_config['chat_agent'],
			llm=self.llm,
		)

	# ---------------------------------------------

	@task
	def chat_task(self) -> Task:
		return Task(
			config=self.tasks_config['chat_task'],
		)

	@crew
	def crew(self) -> Crew:
		return Crew(
			agents=self.agents, # Automatically created by the @agent decorator
			tasks=self.tasks, # Automatically created by the @task decorator
			process=Process.sequential,
			memory=True,
			verbose=True,
			# process=Process.hierarchical, # In case you want to use that instead https://docs.crewai.com/how-to/Hierarchical/
		)

The model dolphin-phi isn't supported by LiteLLM. Take a look at the supported Ollama models in the LiteLLM docs: Ollama | liteLLM
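If you want to keep the rest of your setup unchanged, one option is to pull a model that appears on that list and point the LLM at it. A minimal sketch, assuming you have run ollama pull llama2 locally (swap in whichever listed model you prefer):

from crewai import LLM

# Use a model from LiteLLM's Ollama provider list instead of dolphin-phi
llm = LLM(
    model='ollama/llama2',              # assumption: llama2 is pulled locally and on the supported list
    base_url='http://localhost:11434',  # default local Ollama endpoint
)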

