Unfortunately my Python skills (and time availability) keep me from understanding exactly what it’s doing. My guess is that CrewAI has first-class support for the SUPPORTED_NATIVE_PROVIDERS, everything else falls back to litellm, and somewhere along that path an is_litellm argument ends up being sent to the Groq provider, which results in the GroqException “is_litellm is unsupported”.
I worked around the issue by monkey patching litellm.completion, the function CrewAI ultimately calls for its LLM requests.
So, after instantiating my crew but before calling kickoff(), I call apply_patch() from this litellm_patch.py module:
```python
# litellm_patch.py
import litellm

# Add keys as needed and they will be stripped from the kwargs
# before the real litellm.completion is called.
UNSUPPORTED_KEYS = ["is_litellm"]

# Save the original function
_original_completion = litellm.completion


def _patched_completion(*args, **kwargs):
    # Drop any keyword arguments the provider rejects, then delegate
    # to the original litellm.completion.
    for key in UNSUPPORTED_KEYS:
        kwargs.pop(key, None)
    return _original_completion(*args, **kwargs)


def apply_patch():
    """
    Monkey patch litellm.completion to remove unsupported parameters like 'is_litellm'.
    Call this once at startup.
    """
    litellm.completion = _patched_completion
```
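For completeness, the call order I use looks roughly like this (my own sketch; `my_agents` and `my_tasks` stand in for whatever agents and tasks your crew actually defines):

```python
from crewai import Crew

from litellm_patch import apply_patch

# my_agents and my_tasks are assumed to be defined elsewhere in your project
crew = Crew(agents=my_agents, tasks=my_tasks)

# Patch before kickoff() so every litellm.completion call goes through the wrapper
apply_patch()

result = crew.kickoff()
print(result)
```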
I believe what’s going on is that CrewAI does not list Groq as a SUPPORTED_NATIVE_PROVIDER, so for Groq (and any other non-native provider) it falls back to litellm. Along the way, litellm and/or CrewAI injects an “is_litellm” keyword argument into the completion request sent to the LLM.
However, Groq takes issue with this and throws a GroqException saying “is_litellm is unsupported”.
So this “patch” strips any UNSUPPORTED_KEYS (currently just “is_litellm”) from the keyword arguments before they reach the real litellm.completion.
Then, when kickoff() is called, the completion request CrewAI sends to Groq no longer contains “is_litellm”, and Groq no longer complains.
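If you want to sanity-check that the key really is stripped without hitting the Groq API, you can temporarily swap the saved original for a stub that just echoes back its kwargs (a quick test sketch of mine; the model name is only illustrative):

```python
import litellm
import litellm_patch

litellm_patch.apply_patch()

# Stub out the saved original so no real API call is made; it simply returns
# the kwargs that would have been forwarded to the provider.
litellm_patch._original_completion = lambda *args, **kwargs: kwargs

forwarded = litellm.completion(
    model="groq/llama-3.1-8b-instant",             # illustrative model name
    messages=[{"role": "user", "content": "hi"}],
    is_litellm=True,                               # simulate the injected key
)
print("is_litellm" in forwarded)  # False: the patch stripped it before forwarding
```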