LLM used through Ollama is not working with the FileReadTool

I am trying to analyze a text file containing conversations between people, similar to movie subtitles. For this I am using Ollama with CrewAI, with the "FileReadTool" attached to the agent that analyzes the text.

If I use the Llama 2 model (not through Ollama), the agent can read and print the file contents in the log and also performs the analysis properly.

When I switch to Ollama (with DeepSeek-R1), the FileReadTool does not work at all. Am I missing something here?

It seems that reasoning models such as R1 are not well suited to tool usage at the moment, as also stated in their paper:
“General Capability: Currently, the capabilities of DeepSeek-R1 fall short of DeepSeek-V3
in tasks such as function calling, multi-turn, complex role-playing, and JSON output.”
https://arxiv.org/pdf/2501.12948

I have had good results using the qwen and openhermes models; I haven't tried deepseek-v3 yet.
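For reference, pointing the agent at a tool-capable model served by Ollama looks roughly like this in CrewAI. This is a minimal sketch, not a definitive setup: the model name, file path, and agent descriptions are placeholder assumptions, and it assumes a local Ollama server on the default port (11434) with the model already pulled.

```python
from crewai import Agent, Task, Crew, LLM
from crewai_tools import FileReadTool

# Point CrewAI at a tool-capable model served by a local Ollama instance.
# "ollama/qwen2.5" is an example; any Ollama model with function-calling
# support should work here.
llm = LLM(model="ollama/qwen2.5", base_url="http://localhost:11434")

# Tool that reads the conversation/subtitle file (path is a placeholder).
read_tool = FileReadTool(file_path="conversations.txt")

analyst = Agent(
    role="Dialogue analyst",
    goal="Summarize the conversations in the file",
    backstory="An expert at analyzing dialogue transcripts.",
    tools=[read_tool],
    llm=llm,
)

task = Task(
    description="Read the file and summarize the main topics discussed.",
    expected_output="A short summary of the conversation topics.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task])
print(crew.kickoff())
```

If the tool is still never invoked after switching models, that points to the model lacking function-calling support rather than a CrewAI configuration problem.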

That is true; R1 does not have tool calling yet. You can try this variant that does support tool calling, though it's only available in a 70B flavor: michaelneale/deepseek-r1-goose
