Persistent litellm.BadRequestError with ChatGoogleGenerativeAI (Gemini) in Colab

Hello CrewAI Community,

I'm encountering a persistent `litellm.BadRequestError: LLM Provider NOT provided` when trying to use `ChatGoogleGenerativeAI` with a Gemini model (`gemini-1.5-flash`) within my CrewAI project running in Google Colab.

This error occurs during the `strategy_crew.kickoff()` call, specifically when the first agent tries to interact with the LLM. The full error message is:

```
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-1.5-flash Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: [redacted link]
```

(Note: the `models/gemini/gemini-1.5-flash` part of the error varies with my latest attempt, sometimes showing just `gemini-1.5-flash` or `gemini/gemini-1.5-flash`.)
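For context (my own note, not from the error text): litellm resolves the provider from the `provider/model` prefix of the model string, which is why a bare `gemini-1.5-flash` or a `models/...` path trips provider detection. A minimal standalone litellm call, assuming the `gemini/` provider reads `GEMINI_API_KEY`, would look like:

```python
# Standalone litellm sanity check (sketch only, not from the thread):
# the "gemini/" prefix tells litellm to route the call to Google AI Studio.
import os
from litellm import completion

os.environ["GEMINI_API_KEY"] = "<YOUR-KEY>"  # key variable used by litellm's gemini/ provider

response = completion(
    model="gemini/gemini-1.5-flash",  # provider prefix is what the error is asking for
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```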

**Environment Details:**

*   **Platform:** Google Colab
*   **Python Version:** 3.12.11
*   **Installed Library Versions:**
    *   `crewai-tools`: 0.73.1
    *   `crewai`: 0.193.2
    *   `langchain-google-genai`: 2.1.12
    *   (Other installed libraries as per the `!pip install` command)

**LLM Setup:**

I am initializing the LLM using `langchain_google_genai.ChatGoogleGenerativeAI`:

```python
from langchain_google_genai import ChatGoogleGenerativeAI
import os

GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY") # API key is set as environment variable
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", # Also tried "gemini/gemini-1.5-flash"
                             verbose=True,
                             temperature=0.5,
                             google_api_key=GOOGLE_API_KEY)  # Explicitly passing the API key
```

The `GOOGLE_API_KEY` environment variable is confirmed to be set correctly.
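A quick way to isolate the problem (sketch only) is to call the LangChain wrapper directly, outside CrewAI; if this succeeds, the key and model name are fine and the failure is in how CrewAI hands the model off to litellm:

```python
# Sketch of an isolation test: exercise the LangChain wrapper on its own,
# bypassing CrewAI/litellm entirely.
import os
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    temperature=0.5,
    google_api_key=os.getenv("GOOGLE_API_KEY"),
)
print(llm.invoke("Reply with the single word: ok").content)
```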

**Steps Taken to Troubleshoot This LLM Error:**

  1. Ensured `GOOGLE_API_KEY` is set as an environment variable and explicitly passed it to `ChatGoogleGenerativeAI` (see the Colab sketch after this list).

  2. Tried different model name formats for the `model` parameter in `ChatGoogleGenerativeAI`, including `"gemini-1.5-flash"` and `"gemini/gemini-1.5-flash"`.

  3. Restarted the Colab runtime multiple times and re-executed all setup cells sequentially.
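For reference, this is roughly how the key gets into the environment on Colab (the secret name `GOOGLE_API_KEY` and the extra `GEMINI_API_KEY` export are assumptions on my part, not confirmed details):

```python
# Assumed Colab setup: pull the key from the Secrets panel and export it
# under both names, since langchain-google-genai and litellm look for
# different environment variables.
import os
from google.colab import userdata

GOOGLE_API_KEY = userdata.get("GOOGLE_API_KEY")  # hypothetical secret name
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY    # read by langchain-google-genai
os.environ["GEMINI_API_KEY"] = GOOGLE_API_KEY    # read by litellm's gemini/ provider
```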

**Previous Context (if relevant):**

Prior to this LLM error, I was facing persistent `ImportError` / `ModuleNotFoundError` issues with `crewai_tools.BaseTool`, even after trying different import paths (`from crewai_tools import BaseTool`, `from crewai_tools.agents import BaseTool`, `from crewai.tools import BaseTool`) and recreating the tool files. While those import errors seem to have subsided, this LLM configuration error is now preventing the crew from running.
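For reference, on recent crewai releases my understanding (not confirmed in this thread) is that `BaseTool` lives in `crewai.tools`, and a custom tool looks roughly like this:

```python
# Rough sketch of a custom tool on recent crewai versions (assumed API:
# crewai.tools.BaseTool with a Pydantic args_schema and a _run method).
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field


class SearchInput(BaseModel):
    query: str = Field(..., description="The query to look up")


class SimpleSearchTool(BaseTool):
    name: str = "simple_search"
    description: str = "Looks up a query and returns a short answer."
    args_schema: Type[BaseModel] = SearchInput

    def _run(self, query: str) -> str:
        # Placeholder logic; a real tool would call an API here.
        return f"Results for: {query}"
```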

**Request for Help:**

Could anyone provide guidance on how to correctly configure `ChatGoogleGenerativeAI` with litellm inside CrewAI in a Colab environment so as to avoid the `LLM Provider NOT provided` error? Is there a specific model name format or configuration step I might be missing?

Any help would be greatly appreciated!

Thank you.

```python
from crewai import LLM
import os

os.environ["GEMINI_API_KEY"] = "<YOUR-KEY>"

llm = LLM(
    model="gemini/gemini-flash-latest",
    temperature=0.5
)
```
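In case it helps, this is roughly how that `LLM` instance would be wired into an agent and crew (the agent/task text below is placeholder, not from this thread):

```python
# Hypothetical wiring of the CrewAI-native LLM into an agent and crew
# (role, goal, and task wording are placeholders, not from the thread).
from crewai import Agent, Crew, Task, LLM

llm = LLM(model="gemini/gemini-flash-latest", temperature=0.5)

strategist = Agent(
    role="Strategy Analyst",
    goal="Outline a go-to-market strategy",
    backstory="An analyst who produces concise, actionable plans.",
    llm=llm,  # the gemini/ prefix tells litellm which provider to call
)

task = Task(
    description="Draft a three-bullet launch strategy for a note-taking app.",
    expected_output="Three short bullet points.",
    agent=strategist,
)

crew = Crew(agents=[strategist], tasks=[task])
result = crew.kickoff()
print(result)
```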
