Anybody had luck with gpt-oss and CrewAI?
If so, which provider are you using?
For OpenRouter and Groq, everything is working smoothly (working code below). Hugging Face didn’t run out of the box, but it should just be a matter of time before it’s properly integrated.
```python
from crewai import LLM
from typing import List, Tuple
import random
import os

# Placeholder keys; replace with your own
os.environ["OPENROUTER_API_KEY"] = "sk-YOUR-KEY"
os.environ["GROQ_API_KEY"] = "gsk_YOUR-KEY"

# (provider label, LiteLLM model identifier) pairs
LLM_CONFIGURATIONS: List[Tuple[str, str]] = [
    ("OpenRouter", "openrouter/openai/gpt-oss-120b"),
    ("Groq", "groq/openai/gpt-oss-120b"),
]

CANDIDATE_QUESTIONS: List[str] = [
    "Are LLMs deterministic or stochastic? What does this imply "
    "for their reliability and consistency?",
    "Imagine there's a small town with a very particular barber. This barber "
    "has a unique rule: he shaves all the men in town who visit him. "
    "Does the barber shave himself?",
    "Explain in simple words the concept of synthetic a priori judgments "
    "by Immanuel Kant.",
]

PROMPT_TEMPLATE = (
    "Think deeply and provide a clear, unequivocal answer to the following "
    "question:\n\n{question}"
)

for provider_name, model_identifier in LLM_CONFIGURATIONS:
    print("= " * 25)
    print(f"🔹 Provider: {provider_name}")
    print(f"🔹 Model: {model_identifier}")

    # Pick one of the candidate questions at random
    question = random.choice(CANDIDATE_QUESTIONS)
    print(f"🔹 Question: {question}")

    try:
        llm = LLM(
            model=model_identifier,
            temperature=0.7,
            reasoning_effort="high",
        )
        response = llm.call(
            PROMPT_TEMPLATE.format(question=question)
        )
        print(f"✅ Answer:\n{response.strip()}")
    except Exception as e:
        print(f"❌ Error querying {provider_name}: {e}")

    print("= " * 25 + "\n")
```
It would have been great if your example showed a running crew with tool use. I tried with GroqCloud and the tool inputs were messed up; the sketch below is roughly the pattern I mean.
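For context, here is a minimal sketch of a crew with a single custom tool, pointed at the Groq endpoint. The tool itself and the task wording are made up for illustration; the `@tool` decorator and the `Agent`/`Task`/`Crew` classes are standard CrewAI, but I haven't verified that tool calling actually behaves with gpt-oss on GroqCloud:

```python
from crewai import Agent, Task, Crew, LLM
from crewai.tools import tool

# Toy tool for illustration only
@tool("Square Calculator")
def square(number: int) -> int:
    """Return the square of the given integer."""
    return number * number

llm = LLM(model="groq/openai/gpt-oss-120b", temperature=0.7)

mathematician = Agent(
    role="Mathematician",
    goal="Answer arithmetic questions using the available tools",
    backstory="A careful mathematician who always verifies results with tools.",
    tools=[square],
    llm=llm,
)

task = Task(
    description="What is the square of 17? Use the tool to compute it.",
    expected_output="A single number with a short explanation.",
    agent=mathematician,
)

crew = Crew(agents=[mathematician], tasks=[task])
result = crew.kickoff()
print(result)
```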
I tried using Ollama and got errors with empty responses. I haven't tried to debug the response on the Ollama API side, but I'm assuming the frameworks just aren't adapted to the new model yet.
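For reference, a minimal sketch of the Ollama wiring I mean, assuming the default local endpoint and the `gpt-oss:20b` model tag:

```python
from crewai import LLM

# Setup that produced the empty responses for me.
# Assumes Ollama is serving locally on the default port and the
# model was pulled with `ollama pull gpt-oss:20b`.
llm = LLM(
    model="ollama/gpt-oss:20b",
    base_url="http://localhost:11434",
)

print(llm.call("Say hello in one short sentence."))
```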
Hi, I had a similar issue and I'd like to share my experience. In my case, CrewAI 0.201.1 was using LiteLLM 1.74.9, and LiteLLM had an issue with Ollama + gpt-oss-20b that is being resolved in "Fix Ollama GPT-OSS streaming with 'thinking' field" by colesmcintosh · Pull Request #13375 · BerriAI/litellm · GitHub. So I moved to qwen2.5 14b-instruct and it worked.
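The workaround was just swapping the model string in the same CrewAI setup (the tag below assumes the standard Ollama naming for the Qwen 2.5 14B instruct variant):

```python
from crewai import LLM

# Workaround until the LiteLLM fix lands: use a different Ollama model.
llm = LLM(
    model="ollama/qwen2.5:14b-instruct",
    base_url="http://localhost:11434",
)

print(llm.call("Sanity check: reply with OK."))
```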