I configured the default project when running crewai create and selected llama3.1 as the default model. I'm now trying to override it within CrewBase, but the change doesn't seem to take effect. I keep getting the following error:
Exception: An error occurred while running the crew: litellm.APIConnectionError: OllamaException - {"error":"model 'llama3.1' not found"}
In my code, I have it set up like this:
from typing import List

from crewai import Agent, Task, LLM
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent

from my_tools import ImageToBase64Tool  # my custom tool


@CrewBase
class CrewaiBase():
    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def image_to_base64_agent(self) -> Agent:
        return Agent(
            config=self.agents_config['image_to_base64_agent'],
            tools=[ImageToBase64Tool()],
            model=LLM(model="ollama/gpt-oss:latest", base_url="http://localhost:11434"),
            verbose=True
        )
It seems like the model setting inside the CrewBase class is being ignored and the crew keeps falling back to llama3.1. How can I override the llama3.1 default so the project uses the model I configure in code?