LLM used through Ollama is not working for fileread tool

I am trying to analyze a text file containing conversations between people, similar to movie subtitles. For this I am using Ollama with CrewAI, and the "FileReadTool" for the agent that analyzes the text.

If I use the Llama 2 model (not through Ollama), it can read and print the file contents in the log and also performs the analysis properly.

When I switch the model to Ollama (with DeepSeek-R1), the FileReadTool does not work at all. Am I missing something here?

It seems that reasoning models such as R1 are not ideal for tool usage at the moment, as also stated in their paper:
“General Capability: Currently, the capabilities of DeepSeek-R1 fall short of DeepSeek-V3
in tasks such as function calling, multi-turn, complex role-playing, and JSON output.”
https://arxiv.org/pdf/2501.12948

I have had good results using the qwen and openhermes models; I haven't tried deepseek-v3 so far.

That is true; DeepSeek-R1 does not have tool calling yet. You can try this variant that does support tool calling, though it's only available in a 70B flavor: michaelneale/deepseek-r1-goose
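If you want to try that variant, pulling it through Ollama would look roughly like this (the model name is from the post above; the CrewAI model string in the comment is an assumption and may vary with your CrewAI version):

```shell
# Pull the tool-calling DeepSeek-R1 variant mentioned above
# (70B only, so expect a large download):
ollama pull michaelneale/deepseek-r1-goose

# Then point your CrewAI LLM at it, e.g.:
#   LLM(model="ollama/michaelneale/deepseek-r1-goose", base_url="http://localhost:11434")
```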


I haven’t used exactly that setup, but here are a few thoughts and things you can check:

It's possible that with Ollama + DeepSeek-R1, the FileReadTool isn't being routed properly into the model's tool-calling chain. The Llama 2 path may handle the tool calls as expected (hence the file contents appearing in your logs), while with Ollama + DeepSeek the tooling interface may be more restricted or require explicit registration.

Here’s what I’d try:

  1. Verify tool registration — make sure that when you launch the agent under Ollama/DeepSeek, the FileReadTool is registered in the same tool namespace that the agent actually sees. Maybe the tool isn’t in the “allowed tools” list for that model runtime.

  2. Check for sandboxing or security constraints — Ollama may sandbox or disable certain IO operations in some model configurations, especially when using reasoning models like DeepSeek-R1. DeepSeek’s runtime via Ollama might drop or ignore some file access requests.

  3. Inspect logs or debug mode — see if Ollama gives any warnings or errors about failed tool calls. Sometimes the tool call is simply dropped, or returns an error that's silently swallowed.

  4. Test with a minimal example — write a very simple prompt + tool call to FileReadTool (e.g. “Read this small file”) and see if it ever returns anything. That isolates whether the issue is your conversation logic or the tool interface itself.

  5. Compare versions / runtimes — make sure you're using versions of Ollama and DeepSeek that support tool calling. For example, check how DeepSeek-R1 is listed in Ollama's model library.
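Step 4 above can be sketched without any model in the loop: call the tool directly first, and only then wire it into the agent. If the direct call works but the agent path doesn't, the problem is in the model's tool calling, not the tool. `FileReadToolStub` below is a hypothetical stand-in for `crewai_tools.FileReadTool` so the snippet runs without CrewAI installed; with CrewAI available you would call `FileReadTool().run(file_path=...)` the same way.

```python
import os
import tempfile

class FileReadToolStub:
    """Hypothetical stand-in mirroring FileReadTool's basic behavior:
    return the file's contents as plain text."""
    def run(self, file_path: str) -> str:
        with open(file_path, "r", encoding="utf-8") as f:
            return f.read()

# Create a tiny subtitle-style file to read.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("ALICE: Hello.\nBOB: Hi there.")
    path = f.name

# Step 1: exercise the tool on its own, outside any agent or model loop.
contents = FileReadToolStub().run(path)
print(contents)

os.unlink(path)
```

If this direct call succeeds but the same file never reaches the agent under Ollama + DeepSeek-R1, that points at the model dropping the tool call rather than at FileReadTool itself.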

If you like, I can dig more deeply into why FileReadTool fails under Ollama+DeepSeek. Meanwhile, here’s a write-up I found that covers using DeepSeek locally and integrating it with LM Studio and tool setups, which might help you debug: lm studio deepseek support