Building CrewAI agents with a locally run Ollama LLM

I want to build a multi-agent system for my organization, but my org does not allow the use of external LLM APIs for internal work.
Therefore I would like to use a locally hosted Ollama LLM to build CrewAI agents.
Is this possible?

Yes, it is totally possible.
Run the commands below in a terminal:
curl -fsSL https://ollama.com/install.sh | sh   # install Ollama
ollama serve &                                  # start the server in the background
ollama pull mistral                             # download the Mistral model
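
Before wiring this into CrewAI, you can sanity-check that the server is reachable. A quick sketch in Python (the root endpoint of a running Ollama server typically answers with a short status string like "Ollama is running"):

import urllib.request

# Hit Ollama's root endpoint on the default port; a short status
# string in the response means the server is up and reachable.
with urllib.request.urlopen("http://127.0.0.1:11434") as resp:
    print(resp.read().decode())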

Now point your Agent at the local model via the llm attribute:
llm=LLM(model="ollama/mistral", base_url="http://127.0.0.1:11434")
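
For context, here is a minimal end-to-end sketch of a crew running entirely on the local model. The role, goal, backstory, and task text are illustrative placeholders; only Agent, Task, Crew, and LLM come from the crewai package:

from crewai import Agent, Task, Crew, LLM

# Point CrewAI at the local Ollama server instead of an external API
local_llm = LLM(model="ollama/mistral", base_url="http://127.0.0.1:11434")

# Illustrative agent; swap in your own role, goal, and backstory
researcher = Agent(
    role="Internal Researcher",
    goal="Answer questions using only the locally hosted model",
    backstory="An assistant for an org that disallows external LLM APIs.",
    llm=local_llm,
)

# Illustrative task assigned to that agent
task = Task(
    description="Explain in one paragraph why local LLMs help with data privacy.",
    expected_output="A short paragraph.",
    agent=researcher,
)

# Assemble the crew and run it; all inference stays on 127.0.0.1:11434
crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())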

I hope this helps!

It's possible. Set up Ollama and then follow the instructions in the docs: LLMs - CrewAI