I'm having a hard time getting custom tools to run with locally hosted models in Ollama. I chose the hf.co/Salesforce/Llama-xLAM-2-8b-fc-r-gguf:latest model, which has function/tool-calling capability (and is a leader on the leaderboard), but that didn't seem to work: the LLM returns None or an empty response for any task that requires it to invoke a crewAI tool.
Upon investigation, I figured out that the xLAM model is being invoked via Ollama's plain /api/generate endpoint, which LiteLLM treats as not supporting function calling, so it drops my function definitions before sending the prompt.
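For what it's worth, here's a minimal sketch of how I'd expect the tool definitions to survive if the request went through Ollama's /api/chat endpoint instead (LiteLLM's ollama_chat provider); the get_weather tool here is just a placeholder I made up for testing:

```python
import litellm

# Hypothetical tool in the OpenAI function-calling format;
# the name and parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# "ollama_chat/..." routes through Ollama's /api/chat endpoint;
# "ollama/..." uses /api/generate, where LiteLLM drops the tools.
response = litellm.completion(
    model="ollama_chat/hf.co/Salesforce/Llama-xLAM-2-8b-fc-r-gguf:latest",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```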
I understand this "issue" isn't on the crewAI side of things, but are there any recommendations for running function-calling LLMs locally?
I've heard good things about the function-calling capabilities of the Qwen 2.5 models. I haven't tried the Qwen 3 family, but it should be equally good or better. Give them a try.
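As a starting point, something like this should wire a local Qwen model into a crewAI agent (a minimal sketch: I'm assuming you've pulled the model with `ollama pull qwen2.5`, that Ollama is on its default port, and that crewAI passes the ollama_chat prefix straight through to LiteLLM so requests hit the tool-capable /api/chat endpoint):

```python
from crewai import Agent, LLM

# Assumes `ollama pull qwen2.5` has been run; the ollama_chat/
# prefix routes via /api/chat so tool definitions are preserved.
llm = LLM(
    model="ollama_chat/qwen2.5",
    base_url="http://localhost:11434",
)

agent = Agent(
    role="Research assistant",
    goal="Answer questions, invoking tools when needed",
    backstory="A local agent backed by a Qwen model served through Ollama.",
    llm=llm,
)
```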