Tool calling with local LLMs

Hey everyone! :waving_hand:

I’m struggling to get local LLMs to work properly with CrewAI, especially with basic tools like delegation and the file reader/writer.

When I use Gemini 2.0 Flash, it works fine. With Claude 3.5, it’s even better. But any local model I’ve tried is almost unusable.

I recently tried Qwen2.5-7B-Instruct-1M from Ollama, but results are inconsistent. It sometimes works, but mostly fails with CrewAI tools.

My setup:
Downloaded the model from:

and created the model, using the provided template file as the Modelfile:

ollama create crew-ai-Qwen2.5-7B-Instruct-1M -f Modelfile

Configured CrewAI with:

from crewai import LLM

llm = LLM(
    model="ollama/crew-ai-Qwen2.5-7B-Instruct-1M",
    base_url="http://127.0.0.1:11434"
)

Yet, it still doesn’t handle function calling or tool use reliably.
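In case it helps, here is a minimal sketch of how I’m wiring things up (FileReadTool/FileWriterTool come from crewai_tools; the roles, goals, and file name are just placeholders):

from crewai import Agent, Task, Crew
from crewai_tools import FileReadTool, FileWriterTool

# `llm` is the local Ollama-backed LLM configured above
manager = Agent(
    role="Coordinator",
    goal="Delegate the file work and verify the result",
    backstory="Coordinates the other agents in the crew.",
    llm=llm,
    allow_delegation=True,  # the delegation tool is only injected when co-workers exist
)

writer = Agent(
    role="File writer",
    goal="Write and read files on request",
    backstory="Handles file operations.",
    tools=[FileReadTool(), FileWriterTool()],
    llm=llm,
)

task = Task(
    description="Have the file writer save a short greeting to notes.txt, then report its contents.",
    expected_output="The contents of notes.txt",
    agent=manager,
)

crew = Crew(agents=[manager, writer], tasks=[task], verbose=True)
print(crew.kickoff())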

Has anyone successfully used Qwen2.5 (or any local model) with CrewAI tools like delegation or file operations?
Are there any specific configurations, prompts, or workarounds that help?
Would another local model work better (Mistral, DeepSeek, etc.)?

Any insights would be greatly appreciated! :folded_hands: Thanks in advance for your help!

Tool calling should work with local models via Ollama, but depending on the model it can be inconsistent. I suggest you keep trying different models until you find one that works for you.
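For example, here is a rough sketch of how you could A/B test a few candidates quickly (the model tags are just examples, use whatever you have pulled locally, and this assumes your CrewAI version exposes LLM.call()):

from crewai import LLM

# Candidate local models to compare; swap in your own tags
candidates = [
    "ollama/llama3.1:8b",
    "ollama/qwen2.5:7b-instruct",
    "ollama/mistral:7b-instruct",
]

for name in candidates:
    llm = LLM(model=name, base_url="http://127.0.0.1:11434", temperature=0)
    # Simple smoke test; a real check would run a small crew with a tool-using task
    print(name, "->", llm.call("Reply with the single word OK."))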