Failed to convert the output into a Pydantic model when running the flow using the CLI command "crewai flow kickoff"

Provider List: https://docs.litellm.ai/docs/providers


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.



I am working on CrewAI Flows. When I kick off the flow directly using the main function, it works properly. But when I use the CLI command "crewai flow kickoff", it throws the error "Failed to convert text into a pydantic model".

Provider List: https://docs.litellm.ai/docs/providers

 Failed to convert text into a pydantic model due to the following error: litellm.BadRequestError: watsonxException - {"errors":[{"code":"invalid_input_argument","message":"Invalid input argument for Model 'mistralai/mistral-large': Only user and assistant roles are supported, with the exception of an initial optional system message!","more_info":"https://cloud.ibm.com/apidocs/watsonx-ai"}],"trace":"f8e509fe43949ee942b718d8ab2addbe","status_code":400} Using raw output instead.
# Agent: 'Conditional Matrix Analyst' -- specializing in Java code analysis and path exploration. Responsible for breaking down complex methods into comprehensive decision trees and mapping all possible execution paths.
## Task: Input: 'conditional_statement_block' from methodsClassifierTask
Analyze the provided Java methods and extract all conditional statements. Generate a logical matrix that represents the conditions and potential paths the code may take based on different inputs. Your final output should map each path and condition with clarity. If a condition contains a `||` operator, re-write it as `(OR)` and treat it as a single condition.
Note: A validated and corrected format of conditional_matrix is required. The generated structure should render as a table view in the markdown report file.
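The watsonxException above points at the message roles being sent: per the error text, watsonx's chat endpoint for `mistralai/mistral-large` accepts an optional initial `system` message followed only by `user` and `assistant` turns. Here is a minimal sketch of that constraint as a plain Python check (the `watsonx_roles_ok` function and the example messages are hypothetical, not part of CrewAI or LiteLLM):

```python
def watsonx_roles_ok(messages):
    """Mirror the role rule from the watsonx error message: an optional
    leading 'system' message, then only 'user' and 'assistant' roles."""
    roles = [m["role"] for m in messages]
    if roles and roles[0] == "system":
        roles = roles[1:]  # a single initial system message is allowed
    return all(r in ("user", "assistant") for r in roles)

# Accepted shape: optional system prompt first, then user/assistant turns.
ok = [
    {"role": "system", "content": "You are a Java code analyst."},
    {"role": "user", "content": "Extract the conditional statements."},
]

# Rejected shape: a system message appearing mid-conversation, which
# watsonx refuses with an invalid_input_argument 400 error.
bad = [
    {"role": "user", "content": "Extract the conditional statements."},
    {"role": "system", "content": "Convert the output to this schema."},
]

print(watsonx_roles_ok(ok))   # True
print(watsonx_roles_ok(bad))  # False
```

If the Pydantic conversion step is injecting an extra `system` message after the conversation has started, that would trip this constraint even though the same flow works with other providers.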

Please provide a link to your code to help us troubleshoot for you.

It's production code. Sorry, I can't share it.

So is this on CrewAI Enterprise?

No, it's not on CrewAI Enterprise.

@Paarttipaabhalaji Unfortunately, without seeing the code, it’s just guessing, which will only get you so far.

OK @rokbenko. It's fine.

I saw this error before. Are you hosting a local model or using a provider?

I am using the watsonx provider.

watsonx → Mistral Large (`mistralai/mistral-large`)