The docs for Task say
Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set.
Does that mean that I cannot use an open-source LLM and get a Pydantic output from a crew?
I am wondering the same. Although I set `output_pydantic=` on the Task to a proper class, I can only get a raw output. Any clue how to use a different LLM and have the Task produce a Pydantic output?
Hello,
I have figured out how to use a different LLM and get Pydantic output. Here is an example:
```python
@task
def task2(self) -> Task:
    return Task(
        description=".....",
        expected_output="Expected output should be a pydantic model of Task2TaskOutput type with desc property set to list of dictionary",
        llm=OpenAI35kTurbo,  # I am using Azure LLM
        callback=log_output,  # you can skip this
        output_pydantic=Task2TaskOutput,  # This is my pydantic BaseModel
    )
```
My pydantic output model looks like this:
```python
class Task2TaskOutput(BaseModel):
    desc: list[dict[str, int]]
```
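To sanity-check the model outside the crew, you can validate a sample LLM response directly with Pydantic (v2 API shown; the `outliers`/`spikes` keys are just illustrative data, not part of the schema):

```python
from pydantic import BaseModel, ValidationError

class Task2TaskOutput(BaseModel):
    desc: list[dict[str, int]]

# A well-formed response parses into the model
ok = Task2TaskOutput.model_validate({"desc": [{"outliers": 3}, {"spikes": 1}]})
print(ok.desc)

# A response whose values cannot coerce to int is rejected
try:
    Task2TaskOutput.model_validate({"desc": [{"outliers": "many"}]})
except ValidationError as e:
    print("rejected:", e.error_count(), "error(s)")
```

This is handy for debugging: if validation fails here, the raw output you are getting from the crew would fail the same way.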
Three things that are absolutely important:
1. Set `llm` in your Task. I know there is no documented attribute called `llm` on Task, but this works.
2. Mention in your `expected_output` that you are expecting a pydantic model. Play around with the wording of the expected output.
3. In your agent, after you write the backstory, mention the output format. For example:

backstory = You are a seasoned anomaly detection specialist adept at identifying unusual patterns and behaviors in datasets.
Output format:
```"desc": list[dict[str, int]]```
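In plain Python, the backstory string with the embedded format hint from tip 3 can be built like this before passing it to your Agent (the role text is from the example above; the Agent wiring itself is omitted):

```python
# Backstory with the output-format hint appended, as described in tip 3.
# The schema line mirrors the Task2TaskOutput model (desc: list[dict[str, int]]).
backstory = (
    "You are a seasoned anomaly detection specialist adept at identifying "
    "unusual patterns and behaviors in datasets.\n"
    "Output format:\n"
    '```"desc": list[dict[str, int]]```'
)
print(backstory)
```

Keeping the schema line identical to the Pydantic model's field definition seems to help the LLM emit something that actually validates.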