CrewAI LLM with Groq resulting in GroqException "is_litellm is unsupported"

I’m using CrewAI 1.3.0 and getting a GroqException with “is_litellm is unsupported”.

My LLM configuration:

        self.llm = LLM(
            model="groq/openai/gpt-oss-120b",
            temperature=0.9,
            top_p=0.8,
            reasoning_effort="medium",
            stream=False,
            stop=None,
            api_key=api_key,
        )

Anyone know why this is the case and/or how to overcome it?

Hmm, looking through the CrewAI code (crewai/llm.py), it looks to me like CrewAI does not recognize Groq as a native provider, based on line 287:

SUPPORTED_NATIVE_PROVIDERS: Final[list[str]] = [
    "openai",
    "anthropic",
    "claude",
    "azure",
    "azure_openai",
    "google",
    "gemini",
    "bedrock",
    "aws",
]

and

crewai/llms/providers $ ls -F1
__init__.py
anthropic/
azure/
bedrock/
gemini/
openai/
utils/

Unfortunately my Python skills (and time availability) keep me from understanding exactly what it’s doing. My guess is that CrewAI has first-class support for the SUPPORTED_NATIVE_PROVIDERS, everything else falls back to litellm, and somewhere along the way an “is_litellm” flag gets sent to the Groq provider, which results in the GroqException “is_litellm is unsupported”.
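
If that reading is right, the provider routing boils down to something like this (a simplified sketch of what I think crewai/llm.py does; not its actual code):

# simplified sketch -- NOT CrewAI's actual routing code
SUPPORTED_NATIVE_PROVIDERS = [
    "openai", "anthropic", "claude", "azure", "azure_openai",
    "google", "gemini", "bedrock", "aws",
]

def pick_backend(model: str) -> str:
    provider = model.split("/", 1)[0]      # "groq/openai/gpt-oss-120b" -> "groq"
    if provider in SUPPORTED_NATIVE_PROVIDERS:
        return "native"                    # dedicated client under crewai/llms/providers/
    return "litellm"                       # everything else, including Groq

print(pick_backend("groq/openai/gpt-oss-120b"))  # -> "litellm"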

I worked around the issue by monkey-patching CrewAI’s use of litellm.completion.

So after instantiating my crew but before calling kickoff() I call apply_patch() from this litellm_patch.py module -

# litellm_patch.py

import litellm

UNSUPPORTED_KEYS = ["is_litellm"]  # add keys as needed and they will be stripped from the call

# Save the original function
_original_completion = litellm.completion

def _patched_completion(*args, **kwargs):
    for key in UNSUPPORTED_KEYS:
        kwargs.pop(key, None)
    return _original_completion(*args, **kwargs)

def apply_patch():
    """
    Monkey patch litellm.completion to remove unsupported parameters like 'is_litellm'.
    Call this once at startup.
    """
    litellm.completion = _patched_completion
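
For completeness, the call site looks roughly like this (MyCrew is a hypothetical stand-in for my actual crew class; the point is just the ordering relative to kickoff()):

# main.py -- sketch only; my real setup is more involved
from litellm_patch import apply_patch
from my_project.crew import MyCrew  # hypothetical; substitute your own crew class

crew = MyCrew().crew()   # instantiate the crew as usual
apply_patch()            # patch litellm.completion before any LLM call happens
result = crew.kickoff()  # Groq no longer rejects the request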

I believe what’s going on is that CrewAI does not list Groq among the SUPPORTED_NATIVE_PROVIDERS, so for Groq (and every other non-native provider) it falls back to litellm. Somewhere along that path crewai and/or litellm injects an “is_litellm” keyword argument into the request sent to the LLM.

However, Groq takes issue with this and throws a GroqException saying “is_litellm is unsupported”.

So this “patch” strips any UNSUPPORTED_KEYS from the keyword arguments passed to litellm.completion, currently just “is_litellm”.

So then when kickoff() is called, CrewAI goes through the patched litellm.completion, the offending kwarg is stripped before the request reaches Groq, and Groq no longer complains about an unsupported “is_litellm”.
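
If this theory is right, the error should be reproducible outside CrewAI with litellm alone. Here’s an untested sketch; it assumes GROQ_API_KEY is set in the environment and that litellm forwards the stray keyword argument through to Groq:

import litellm

# Passing the stray kwarg directly should trigger the same complaint from Groq
# (assumption: litellm forwards unrecognized kwargs to the provider).
litellm.completion(
    model="groq/openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "hello"}],
    is_litellm=True,  # the kwarg Groq rejects with "is_litellm is unsupported"
)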
