Redundant API Call When Using output_pydantic

I pass my Pydantic model `self.qa_output` into the task like this:

```python
task = Task(
    description=qa_task_description,
    expected_output=qa_expected_output,
    agent=self.qa_agent(),
    async_execution=False,
    output_pydantic=self.qa_output,
    tools=self.qa_tool,
)
```
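For context, `output_pydantic` expects a Pydantic `BaseModel` subclass. A minimal sketch of the kind of model that might back `self.qa_output` (the name `QAOutput` and its fields are hypothetical, chosen to match the JSON shown below):

```python
from pydantic import BaseModel


class QAOutput(BaseModel):
    """Hypothetical model of the shape passed as output_pydantic."""
    field_1: bool
    field_2: str
```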

When I check the response from the API call in the OpenAI dashboard, I see it correctly produced the output matching the expected JSON schema (no tool call in this particular case):

```json
{ "field_1": false, "field_2": "...", ....}
```

But then I see another API call that tries to parse the first result into my pydantic model.
This is the input in the tool-call parameter passed to OpenAI:

```json
{
  "name": "MainPromptOutput",
  "description": "Correctly extracted `MainPromptOutput` with all the required parameters with correct types",
  "strict": false,
  "parameters": {
    "properties": {
.....
}
```

So my question is: what is the logic behind the decision to make a second API call to parse the initial result? And can I disable this behavior?

Looking at the source code, it seems CrewAI relies on the Instructor library for structured data extraction. Instructor, in turn, makes a call (using LiteLLM) to actually handle the extraction. So my guess is the LLM is first called by CrewAI to generate the response, and then the Instructor library makes a second call to parse that response into a usable structured format.
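To illustrate why the second call carries a tool definition like the one above: a structured-extraction library in the Instructor style typically converts the Pydantic model's JSON schema into an OpenAI function/tool spec and then asks the LLM to "call" that function, which forces the response into the model's shape. A rough sketch of how such a payload could be derived (this mirrors the observed payload conceptually; Instructor's actual internals may differ):

```python
from pydantic import BaseModel


class MainPromptOutput(BaseModel):
    # Hypothetical fields standing in for the real model's schema.
    field_1: bool
    field_2: str


# Derive a tool definition resembling the one seen in the dashboard.
schema = MainPromptOutput.model_json_schema()
tool_def = {
    "name": schema.pop("title", "MainPromptOutput"),
    "description": (
        "Correctly extracted `MainPromptOutput` with all the "
        "required parameters with correct types"
    ),
    "strict": False,
    "parameters": schema,  # contains "properties", "required", etc.
}
print(tool_def["parameters"]["properties"])
```

A second LLM request would then be sent with this tool definition attached, which is the extra API call observed in the dashboard.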

As for modifying this behavior, you’ll probably have to dig into the CrewAI source and build custom behavior that suits your needs.

Thanks, @maxmoura. Will take a look at the source code.