Ensuring Controlled Output and Language in CrewAI

Dear CrewAI Community and Team,

I need assistance in understanding whether CrewAI currently offers features to validate and control the output of a Crew. When integrating a Crew into real business processes, maintaining control over its output—both in format and language—is crucial. However, I have yet to find a reliable solution.

Issue:

I require the Crew’s output in a specific JSON format and language. My setup (sketched below) includes:

• A hierarchical process managed by an orchestrator agent

• Agents utilizing output_pydantic for structured responses
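
Here is a simplified version of that setup (the Pydantic model, agent definition, and manager LLM are placeholders, not my real configuration):

```python
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel


class AnalysisResult(BaseModel):
    """Placeholder schema; the real model has more fields."""
    title: str
    summary: str


analyst = Agent(
    role="Analyst",
    goal="Analyze the input and produce a structured report in English",
    backstory="An experienced business analyst.",
)

analysis_task = Task(
    description="Analyze the provided material and produce a report in English.",
    expected_output="A report matching the AnalysisResult schema.",
    agent=analyst,
    output_pydantic=AnalysisResult,  # per-task structured output
)

crew = Crew(
    agents=[analyst],
    tasks=[analysis_task],
    process=Process.hierarchical,  # orchestrated by a manager agent
    manager_llm="gpt-4o",          # placeholder manager model
)

result = crew.kickoff()
```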

Challenges Encountered:

• When using output_pydantic, the agent’s output is often rewritten in an unpredictable manner—summarized, altered, or even translated into an unintended language. (For example, I unexpectedly received a response in Japanese without any related prompt.)

• It’s unclear which component determines the final Crew output—is it the orchestrator agent or the last triggered agent?

Request for Assistance:

• Are there any CrewAI features that allow better control over output consistency and language?

• How can I ensure that responses adhere strictly to the required format without unexpected modifications?

• Who is ultimately responsible for the final output in a Crew-managed process?

I would greatly appreciate any guidance or best practices.

Thank you!

The output of the last task to be executed in your crew becomes the final output of the crew. To get consistent results, you need to define the output structure on that task.

Since you want output in JSON, you should set output_json to the Pydantic model you want the JSON shaped as. You can see more in the documentation: Tasks - CrewAI
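
As a quick illustration (the schema and agent name here are just placeholders for whatever your crew uses), the last task would look something like this:

```python
from crewai import Task
from pydantic import BaseModel


class FinalReport(BaseModel):
    """Placeholder schema describing the JSON you want back."""
    title: str
    summary: str


final_task = Task(
    description=(
        "Compile the findings from the previous tasks into a final report. "
        "Write it in English."
    ),
    expected_output="A JSON object matching the FinalReport schema.",
    agent=writer_agent,       # placeholder: whichever agent runs last
    output_json=FinalReport,  # the crew's final output will follow this schema
)

# After kickoff, the structured result is available as a dict:
# result = crew.kickoff()
# result.json_dict
```

Stating the required language explicitly in that last task's description and expected_output should also help keep the final answer from being translated.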

Shameless plug: I wrote a post on CrewAI task outputs which might help you: Notes on CrewAI task structured outputs


Thanks for the article.

I’m trying to use converter_cls and rewrite the converter classes, as their built-in prompts are pretty weak. I hope the team will improve them.
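
Roughly what I’m experimenting with, in case it helps anyone (treat this as a sketch: the Converter import path and the exact base-class behaviour may differ between CrewAI versions, and the task wiring at the bottom uses placeholder names):

```python
from crewai.utilities.converter import Converter  # import path may vary by version

STRICT_PREFIX = (
    "Return ONLY valid JSON that matches the given schema. "
    "Do not summarize, translate, or add commentary.\n"
)


class StrictConverter(Converter):
    """Converter subclass that reinforces the schema and language rules."""

    def to_pydantic(self, *args, **kwargs):
        self._reinforce()
        return super().to_pydantic(*args, **kwargs)

    def to_json(self, *args, **kwargs):
        self._reinforce()
        return super().to_json(*args, **kwargs)

    def _reinforce(self):
        # Prepend stricter instructions once, then defer to the default logic.
        if not (self.instructions or "").startswith(STRICT_PREFIX):
            self.instructions = STRICT_PREFIX + (self.instructions or "")


# Wired into a task via converter_cls (placeholder task fields):
# task = Task(..., output_pydantic=MyModel, converter_cls=StrictConverter)
```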
