Recommendations for running custom tools with local Ollama models (with function-calling capabilities)

I am having a hard time running custom tools with locally hosted models in Ollama. I chose the hf.co/Salesforce/Llama-xLAM-2-8b-fc-r-gguf:latest model, which has function/tool-calling capability (it is a leader on the leaderboard), but that didn't seem to work. The LLM returns None or an empty response for any task that requires it to invoke a crewAI tool.

Upon investigation, I figured out that the xLAM model is being invoked via Ollama's plain /api/generate endpoint, which LiteLLM treats as "functions-unsupported" and therefore drops my function definitions before sending the prompt.
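To isolate where the definitions get dropped, here is a minimal sketch that calls LiteLLM directly, bypassing crewAI (the get_resource schema and the prompt are placeholders, not my real definitions). LiteLLM's ollama/ prefix routes through /api/generate, while ollama_chat/ routes through /api/chat, which accepts tools:

from litellm import completion

# Placeholder tool schema, only used to see if tool_calls come back non-empty.
tools = [{
    "type": "function",
    "function": {
        "name": "get_resource",
        "description": "Fetch a resource by its id",
        "parameters": {
            "type": "object",
            "properties": {"resource_id": {"type": "string"}},
            "required": ["resource_id"],
        },
    },
}]

response = completion(
    model="ollama_chat/hf.co/Salesforce/Llama-xLAM-2-8b-fc-r-gguf:latest",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "Fetch resource abc-123"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # None means the tools were dropped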

I understand this "issue" is not on the crewAI side of things, but are there any recommendations for running function-calling LLMs locally?

Thanks!

I've heard good things about the function-calling capabilities of the Qwen 2.5 models. I haven't tried the Qwen 3 model family, but it should be equally good or even better. Give them a try.


Indeed, qwen2.5:14b-instruct is able to invoke my custom tools.
Here is the LLM initialisation used by the agent:

from crewai import LLM

qwen_llm = LLM(
    model="ollama_chat/qwen2.5:14b-instruct",
    base_url="http://localhost:11434",
    tools=TOOLS,
)
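As a quick sanity check outside of a full crew, the LLM can also be called directly. A hedged sketch: the prompt below is made up, and it assumes LLM.call accepts a plain string:

# Smoke test: a tool-requiring prompt should now produce a tool call
# rather than None or an empty string.
reply = qwen_llm.call("Fetch the resource with id abc-123")
print(reply)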

TOOLS is the set of tool definitions that I defined within crewAI. Here is a sample:

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "tool_name",
            "description": "tool_description",
            "parameters": {
                "type": "object",
                "properties": {
                    "resource_id": {"type": "string", "description": "parameter_desc"}
                },
                "required": ["required_param_name"]
            }
        }
    }
]

Note that the function name, description, and other values above are the same as those defined in the crewAI tools.
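For context, here is a hedged sketch of what the matching crewAI tool could look like; the name, docstring, and resource_id argument simply mirror the placeholder dict above:

from crewai.tools import tool

@tool("tool_name")
def tool_name(resource_id: str) -> str:
    """tool_description"""
    # Real lookup logic would go here; this stub just echoes the id.
    return f"resource {resource_id}"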

Is it necessary to add the tools to the LLM definition? :thinking:

Shouldn’t CrewAI do that for you automatically?