Hi everyone,
I’m encountering an issue when running my CrewAI project and would appreciate any insights or suggestions.
Context:
I’m working on a collaborative crew setup with three agents (writer, researcher, reviewer) defined in my agents.yaml, and a collaborative task defined in tasks.yaml. The agents use different LLMs (some via together_ai, some via openrouter).
tasks.yaml
collaborative_task:
  description: >
    Create a marketing strategy for a new AI product.
    Content Writer: Focus on messaging and content strategy
    Market Research Specialist: Provide market analysis and competitor insights
    Marketing Strategy Reviewer: Review and refine the overall strategy for clarity, effectiveness, and alignment with business goals
    Work together to create a comprehensive strategy.
  expected_output: >
    Complete marketing strategy with research backing
  agent: writer
agents.yaml
writer:
  role: >
    Content Writer
  goal: >
    Create engaging, well-structured content for marketing strategies.
  backstory: >
    You are a skilled content writer who excels at transforming research and ideas into compelling, readable marketing strategies for innovative products.
  llm: together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo-Free
  temperature: 0.7
  allow_delegation: true

researcher:
  role: >
    Market Research Specialist
  goal: >
    Provide accurate market analysis and competitor insights for new AI products.
  backstory: >
    You are a meticulous researcher with expertise in analyzing markets and competitors, delivering reliable and up-to-date information to support strategic decisions.
  llm: openrouter/deepseek/deepseek-r1-0528-qwen3-8b:free
  temperature: 0.7
  allow_delegation: false

reviewer:
  role: >
    Marketing Strategy Reviewer
  goal: >
    Review and improve the marketing strategy for clarity, coherence, and effectiveness.
  backstory: >
    You are an experienced marketing strategist, skilled at reviewing and refining marketing documents to ensure high quality and impact.
  llm: openrouter/deepseek/deepseek-r1-0528-qwen3-8b:free
  temperature: 0.5
  allow_delegation: false
crew.py
from crewai import Agent, Task, Crew
from crewai.project import CrewBase, agent, task, crew
from dotenv import load_dotenv


@CrewBase
class CollaborativeCrew():
    def __init__(self):
        # Load environment variables (API keys) from the project's .env file
        load_dotenv("../.env")

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"]
        )

    @agent
    def writer(self) -> Agent:
        return Agent(
            config=self.agents_config["writer"],
        )

    @agent
    def reviewer(self) -> Agent:
        return Agent(
            config=self.agents_config["reviewer"],
        )

    @task
    def collaborative_task(self) -> Task:
        return Task(
            config=self.tasks_config["collaborative_task"]
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            verbose=True
        )

    def run(self):
        try:
            print(self.crew().kickoff())
        except Exception as e:
            raise Exception(f"Error while running the crew: {e}")
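One thing I'm not sure about: load_dotenv("../.env") resolves that relative path against the current working directory, so depending on where the crew is launched from, the keys might not be found at all. This is the quick check I can run to confirm they are visible to the process (the variable names are my assumption of what litellm expects for together_ai and openrouter, so adjust if your .env uses different ones):

import os
from dotenv import load_dotenv

load_dotenv("../.env")  # same relative path as in crew.py

# Key names assumed from the litellm provider docs
for key in ("TOGETHERAI_API_KEY", "OPENROUTER_API_KEY"):
    print(key, "is set" if os.getenv(key) else "is MISSING")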
The problem:
When I execute the code, I get the following error:
🚀 Crew: crew
└── 📋 Task: 066146a9-20a2-481d-b25f-bbb55c68e17c
Status: Executing Task...
├── 🔧 Used Delegate work to coworker (1)
├── 🔧 Used Delegate work to coworker (2)
└── ❌ LLM Failed
╭───────────────────────────────────────────────────────────────────────────────── LLM Error ─────────────────────────────────────────────────────────────────────────────────╮
│ │
│ ❌ LLM Call Failed │
│ Error: 'NoneType' object has no attribute 'choices' │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Error during LLM call: 'NoneType' object has no attribute 'choices'
An unknown error occurred. Please check the details below.
Error details: 'NoneType' object has no attribute 'choices'
Questions:
- What could cause the LLM call to return None, leading to the 'NoneType' object has no attribute 'choices' error?
- Is this related to the LLM provider, the model configuration, or something in my YAML/task setup?
- Are there any recommended debugging steps for this kind of error in CrewAI?
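In case it helps with the last question: here's the kind of standalone check I was planning to run to isolate the provider from CrewAI. It calls the same model strings directly through litellm (which CrewAI uses under the hood), so a failure here would point at the provider or API keys rather than at my crew setup. This is just a sketch; please correct me if there's a better way to test this.

from dotenv import load_dotenv
from litellm import completion

load_dotenv("../.env")

# Call each model directly, outside of CrewAI, with the same model strings as in agents.yaml
for model in [
    "together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    "openrouter/deepseek/deepseek-r1-0528-qwen3-8b:free",
]:
    try:
        resp = completion(
            model=model,
            messages=[{"role": "user", "content": "Reply with the single word: ok"}],
        )
        print(model, "->", resp.choices[0].message.content)
    except Exception as e:
        print(model, "failed:", e)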