Fixing timeouts by bypassing LiteLLM

Hello,

I’m having so many session timeouts with LiteLLM that I would like to bypass it and call the model directly with langchain_ollama. However, assigning the langchain_ollama LLM to the agent still raises an error in LiteLLM.
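Here is roughly what I’m trying (a minimal sketch; the model name matches the log below, while the base_url and the agent’s role/goal/backstory are placeholders for my actual setup):

    from crewai import Agent
    from langchain_ollama import ChatOllama

    # Direct langchain_ollama client pointed at a local Ollama server
    # (base_url assumes the default Ollama port).
    llm = ChatOllama(
        model="qwen2.5:32b",
        base_url="http://localhost:11434",
    )

    # Assigning this to the agent is what still ends up raising the
    # LiteLLM error shown below.
    agent = Agent(
        role="Researcher",          # placeholder role/goal/backstory
        goal="Answer questions",
        backstory="A test agent",
        llm=llm,
    )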

Does anyone know what I can do to bypass it?

    2025-03-31 10:55:06,360 - __main__ - ERROR - Error in crew execution: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=qwen2.5:32b
     Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
     in send
        raise ReadTimeout(e, request=request)
    requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='telemetry.crewai.com', port=4319): Read timed out. (read timeout=30)
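
From the error message, it sounds like LiteLLM wants a provider prefix on the model name. Would keeping LiteLLM but prefixing the model with `ollama/` (which I understand to be LiteLLM’s provider prefix for Ollama models) be the intended fix? A minimal sketch of what I mean, with the base_url again being my assumption about the default local Ollama port:

    from crewai import Agent, LLM

    # CrewAI's own LLM wrapper, with the "ollama/" provider prefix the
    # error message asks for; base_url assumes the default Ollama port.
    llm = LLM(
        model="ollama/qwen2.5:32b",
        base_url="http://localhost:11434",
    )

    agent = Agent(
        role="Researcher",          # placeholder fields, as above
        goal="Answer questions",
        backstory="A test agent",
        llm=llm,
    )

Also, the ReadTimeout at the end seems to come from CrewAI’s telemetry endpoint (telemetry.crewai.com) rather than from the model call itself. Since CrewAI’s telemetry is built on OpenTelemetry, I’ve seen setting the standard `OTEL_SDK_DISABLED` environment variable suggested as a way to switch it off, though I’m not sure it covers this exact timeout:

    import os

    # Disable the OpenTelemetry SDK before importing crewai; whether this
    # silences the telemetry.crewai.com timeout is my assumption.
    os.environ["OTEL_SDK_DISABLED"] = "true"

    from crewai import Agent, Crew  # imported after the env var is set

Can anyone confirm whether the prefix fixes the BadRequestError, or whether there is a supported way to bypass LiteLLM entirely?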