Hi everyone,
I’m currently using OpenAI’s o4-mini as the reasoning LLM for a CrewAI setup that performs a fairly standard research task. The process is structured like this (a rough sketch of the setup follows the list):
- 14 agents each research one specific section of a report (total: 14 sections).
- A 15th agent compiles the final report based on the 14 research results.
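For context, the crew is wired up roughly as below. This is a simplified sketch, not my exact code: the real roles, goals, backstories, and task descriptions are much longer, and the section topics here are placeholders.

```python
from crewai import Agent, Crew, Task

# 14 researchers, one per report section (roles/goals trimmed down here).
researchers = [
    Agent(
        role=f"Researcher for section {i}",
        goal=f"Research the content needed for section {i} of the report",
        backstory="...",
        llm="o4-mini",
    )
    for i in range(1, 15)
]

research_tasks = [
    Task(
        description=f"Research section {i} of the report.",
        expected_output=f"A well-sourced draft of section {i}.",
        agent=agent,
    )
    for i, agent in enumerate(researchers, start=1)
]

# The 15th agent compiles the final report from the 14 research results.
compiler = Agent(
    role="Report compiler",
    goal="Compile the final report from the 14 section drafts",
    backstory="...",
    llm="o4-mini",
)

compile_task = Task(
    description="Compile the final report from all section drafts.",
    expected_output="The complete report.",
    agent=compiler,
    context=research_tasks,  # the final task depends on all 14 research outputs
)

crew = Crew(
    agents=researchers + [compiler],
    tasks=research_tasks + [compile_task],
    verbose=True,
)
result = crew.kickoff()
```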
This setup has worked well until recently. Over the past few days, however, I’ve started encountering this error:
```
litellm.exceptions.ContentPolicyViolationError: litellm.BadRequestError: litellm.ContentPolicyViolationError: ContentPolicyViolationError: OpenAIException - Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning#advice-on-prompting
```
When this happens, the affected task fails entirely. Since the final report depends on all 14 inputs, the whole process breaks down.
I’m looking for help on a few key points:
- **How can I inspect the exact prompts CrewAI sends to `o4-mini`?**
  I’ve tried enabling verbose mode, logging to a file, and even overriding the `step_callback` in the `Crew(...)` config (see the sketch after this list), but I haven’t been able to capture the actual prompts sent to the model. Seeing the “harmful” prompt would let me reword it, work around any problematic phrasing, or even override CrewAI’s internal instructions if needed.
- **How can I retry a failed task?**
  I set `max_retry_limit=5` for the agents, but it doesn’t seem to help, presumably because the same prompt just fails again on every retry. I also tried the task’s `guardrail` mechanism, but it only triggers when the task succeeds and returns output, which doesn’t happen in this case.
- **Is there a way to allow the process to complete even if a task fails?**
  The final report would obviously be degraded if one of the research inputs is missing, but that’s still preferable to wasting all the tokens on a completely failed run.
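For reference, here is roughly what I’ve tried so far. This is a minimal sketch under my current understanding of the CrewAI options involved (`verbose`, `output_log_file`, `step_callback`, `max_retry_limit`, `guardrail`); the callback and guardrail bodies are simplified placeholders, not my real logic:

```python
from crewai import Agent, Crew, Task

def log_step(step_output):
    # What I hoped would reveal the outgoing prompts. In practice it only
    # surfaces the agent's intermediate steps, not the raw request to o4-mini.
    print(step_output)

def guard(output):
    # Task guardrail: it only runs once the task has produced output, so it
    # never fires when the request itself is rejected with a policy error.
    return (True, output)

researcher = Agent(
    role="Researcher for section 1",
    goal="Research section 1 of the report",
    backstory="...",
    llm="o4-mini",
    max_retry_limit=5,  # retries happen, but the same flagged prompt fails each time
)

task = Task(
    description="Research section 1 of the report.",
    expected_output="A draft of section 1.",
    agent=researcher,
    guardrail=guard,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    verbose=True,                 # verbose mode
    output_log_file="crew.log",   # logging to a file
    step_callback=log_step,       # step callback override
)
```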
Any guidance or workarounds would be greatly appreciated. Thanks in advance!