CrewAI on Mistral API Error

Hello,

I am trying to use Mistral as the LLM in a crew. The code works fine with Claude, but when I switch to Mistral it fails with the error below. The Mistral API dashboard shows output tokens being produced, yet I still get this error:

33780 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: MistralException - Error code: 400 - {'object': 'error', 'message': 'Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant', 'type': 'invalid_request_error', 'param': None, 'code': None}
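For what it's worth, this 400 doesn't look specific to CrewAI: the error text itself says Mistral expects the last message to have role user or tool (or assistant with prefix=True). A hypothetical minimal repro through plain LiteLLM, assuming MISTRAL_API_KEY is set in the environment:

import litellm

# Ending the conversation on a plain assistant turn violates
# Mistral's "last role must be User or Tool" rule and should
# return the same 400 invalid_request_error.
litellm.completion(
    model="mistral/mistral-large-2407",
    messages=[
        {"role": "user", "content": "Summarize the report."},
        {"role": "assistant", "content": "Here is a partial draft:"},
    ],
)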

The code is below:

from crewai import LLM, Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import ScrapeWebsiteTool, SerperDevTool

llm_mistral = LLM(api_key=mistral_api_key, model="mistral/mistral-large-2407")

@CrewBase
class Reports():
    """Reports crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            tools=[SerperDevTool(), ScrapeWebsiteTool()],
            allow_delegation=False,
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            tools=[ScrapeWebsiteTool()],
            llm=llm_mistral,
            allow_delegation=False,
            verbose=True
        )

    @agent
    def editor(self) -> Agent:
        return Agent(
            config=self.agents_config['editor'],
            allow_delegation=True,
            llm=llm_mistral,
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
            agent=self.researcher()
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            agent=self.reporting_analyst(),
            context=[self.research_task()]
        )

    @task
    def editor_task(self) -> Task:
        return Task(
            config=self.tasks_config['editor_task'],
            agent=self.editor(),
            context=[self.reporting_task()]
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Reports crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,    # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True
            # process=Process.hierarchical,  # In case you want to use that instead: https://docs.crewai.com/how-to/Hierarchical/
        )

Thanks.


I’m also running into the same issue, using the same model.

It works if I remove the tools, but even without them it still fails after a few iterations.
I have been able to make it work by using Claude Haiku whenever a tool is needed and Mistral for reasoning.
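In case it helps, here is a rough sketch of that split, inside the same @CrewBase class as above; the Haiku model string and the variable names are just examples, so adjust them to your setup:

llm_haiku = LLM(api_key=anthropic_api_key, model="anthropic/claude-3-haiku-20240307")
llm_mistral = LLM(api_key=mistral_api_key, model="mistral/mistral-large-2407")

@agent
def researcher(self) -> Agent:
    # Tool-using agent: Claude Haiku handles the tool-calling turns
    return Agent(
        config=self.agents_config['researcher'],
        tools=[SerperDevTool(), ScrapeWebsiteTool()],
        llm=llm_haiku,
        verbose=True
    )

@agent
def editor(self) -> Agent:
    # Tool-free agent: Mistral does the pure reasoning and writing
    return Agent(
        config=self.agents_config['editor'],
        llm=llm_mistral,
        verbose=True
    )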

I’m also running into the same issue with mistral/mistral-small-latest. Any news on how to solve this problem?

llm = LLM(
    model="mistral/mistral-small-latest",
    temperature=0.1,
    api_key=MISTRAL_API_KEY,
    max_retries=2,
    verbose=True
)

@DanielPuentee I think the issue might stem from the choice of LLM. Since Mistral Large didn’t work for @Isaac_Garcia, I doubt Mistral Small would, if the capability of the LLM is in fact the issue.

I have noticed that it works if you use the following model:

mistral = LLM(
    model="mistral/mistral-medium",
    temperature=0.1,
)

Hi, I’ve managed to make it work by modifying the llm.py file in the crewAI repo: [BUG] Mistral LLM Fails with Tools Due to Role Expectation · Issue #2194 · crewAIInc/crewAI · GitHub

The issue is in how message roles are handled between crewAI and the Mistral API. Mistral requires the final message in a request to have role user or tool (or assistant with prefix=True), but when tools are involved crewAI was ending the message list with a plain assistant message, which Mistral rejects.
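Until the fix ships, a rough sketch of that kind of normalization (ensure_mistral_roles is a made-up helper name, not the actual patch from the PR), applied to the message list before it is sent to Mistral:

def ensure_mistral_roles(messages: list[dict]) -> list[dict]:
    # If the conversation ends on an assistant turn, append a short
    # user message so Mistral's "last role must be user or tool
    # (or assistant with prefix=True)" rule is satisfied.
    if messages and messages[-1].get("role") == "assistant":
        return messages + [{"role": "user", "content": "Please continue."}]
    return messages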

There’s a PR waiting to be merged. I hope it will be released in the next version! 🙂