LiteLLM Timeouts with Ollama models

Hello,

Since updating to CrewAI 0.102.0, I have been running into a lot of timeouts when using local models with Ollama. Is anyone else facing the same issue?

crewai        0.102.0
crewai-tools  0.36.0
litellm       1.60.2
ollama        0.5.1
model         mistral-nemo:latest | mistral-small:latest | llama3.3:latest

Host: MacBook Pro M1 Max, 64 GB
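To narrow down whether the timeout comes from LiteLLM itself rather than CrewAI, here is a minimal sketch of a direct call against the Ollama server (the model tag and the default base URL are assumptions, adjust to your setup):

import litellm

# Direct LiteLLM call against a local Ollama server, bypassing CrewAI.
# LiteLLM's default request timeout is 600 s, which is where the 600.0 in
# the timeout errors comes from.
response = litellm.completion(
    model="ollama/mistral-nemo",  # assumed model tag; use your own
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
    api_base="http://localhost:11434",  # default Ollama endpoint
    timeout=120,  # seconds; fail fast instead of waiting 600 s
)
print(response.choices[0].message.content)

If this call times out too, the problem is below CrewAI (Ollama or LiteLLM); if it answers quickly, the regression is more likely in how CrewAI drives the calls.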

Hi there, I am facing this issue right now.
I can't get past the planning phase; the connection just seems to freeze.

This is my error log:

  File "C:\Users\RubenCasillasPacheco\Documents\GithubFungarium\Fungarium_Agents\secretaries\.venv\Lib\site-packages\crewai\tannectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.   nnectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 secondsnnectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.      error occurred while running the crew: Command '['uv', 'run', 'run_crew']' returned non-zero ennectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.   ror occurred while running the crew: Command '['uv', 'run', 'run_crew']' returned non-zero exit stnnectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.

nnectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.
nnectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.
nnectionError: litellm.APIConnectionError: OllamaException - litellm.Timeout: Connection timed out after 600.0 seconds.
An error occurred while running the crew: Command ‘[‘uv’, ‘run’, ‘run_crew’]’ returned non-zero exit status 1.
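In case it helps, this is a sketch of how I would try raising the request timeout on the LLM object and pointing the planning phase at the same local model (the timeout kwarg is assumed to be forwarded to LiteLLM, and the model tag and base URL are placeholders):

from crewai import Agent, Crew, LLM, Task

# Sketch: push the per-request timeout above LiteLLM's 600 s default.
local_llm = LLM(
    model="ollama/mistral-nemo",        # placeholder model tag
    base_url="http://localhost:11434",  # default Ollama endpoint
    timeout=1800,                       # seconds; assumed to be passed to LiteLLM
)

writer = Agent(
    role="Writer",
    goal="Answer briefly",
    backstory="A concise assistant.",
    llm=local_llm,
)

task = Task(
    description="Say hello in one sentence.",
    expected_output="One sentence.",
    agent=writer,
)

crew = Crew(
    agents=[writer],
    tasks=[task],
    planning=True,
    planning_llm=local_llm,  # make the planning phase use the local model too
)
print(crew.kickoff())

Slow local models can legitimately take minutes on long planning prompts, so a bigger timeout at least tells "slow" apart from "hung".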

I don't know if something changed that affects the performance of local models, but since the upgrade I'm having a very hard time getting local models to finish the work.
In some cases the same task, run with LangGraph or AutoGen and the same local models, completes in less time.

For now, with CrewAI I'm relying on the Google AI Studio API free tier, since it allows 15 RPM and 1M TPM if I'm not mistaken; just be aware that your data will be used to improve their products, as stated in the free tier terms.
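For reference, this is a sketch of the fallback config I use (the model name is just an example; LiteLLM reads the key from GEMINI_API_KEY when you use the gemini/ prefix):

import os
from crewai import LLM

# Google AI Studio free tier through LiteLLM's gemini/ provider.
gemini_llm = LLM(
    model="gemini/gemini-1.5-flash",       # example model; pick one your key can access
    api_key=os.environ["GEMINI_API_KEY"],  # key from Google AI Studio
)

Then pass gemini_llm as the llm= on each agent, the same way as a local model.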