Hello there
I’m trying to use a local llama.cpp server with CrewAI, but I keep getting the following error from the llama.cpp server, which also causes the CrewAI code to crash:
```
got exception: {"code":500,"message":"Cannot have 2 or more assistant messages at the end of the list.","type":"server_error"}
srv log_server_r: request: POST /v1/chat/completions 127.0.0.1 500
```
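From the error text, the server seems to reject any chat request whose `messages` list ends with two or more consecutive assistant turns, which is presumably what CrewAI (via LiteLLM) ends up sending at some point in the run. For what it's worth, a bare request shaped like this should trigger the same 500 without CrewAI in the loop (the port and model name below are placeholders for whatever llama-server is actually serving):

```python
import requests

# Hypothetical minimal repro: a messages list ending with two
# consecutive assistant turns, which is exactly what the error
# message complains about. Port and model name are placeholders.
payload = {
    "model": "gemma3",
    "messages": [
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi there."},
        {"role": "assistant", "content": "How can I help?"},  # second trailing assistant turn
    ],
}

resp = requests.post("http://127.0.0.1:8080/v1/chat/completions", json=payload)
print(resp.status_code)  # expected: 500
print(resp.text)         # "Cannot have 2 or more assistant messages at the end of the list."
```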
I’ve tried hosting gemma3, deepseek, and qwen, and I get the same issue with all of them, so it doesn't seem to be model-specific.
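For context, the CrewAI side is wired up roughly like this; the model name, base URL, and API key are placeholders for my actual local setup:

```python
from crewai import Agent, Crew, Task, LLM

# Placeholder values: adjust the model name and port to match the
# running llama-server instance.
llm = LLM(
    model="openai/gemma3",                # LiteLLM-style "openai/<name>" prefix for an OpenAI-compatible endpoint
    base_url="http://127.0.0.1:8080/v1",  # local llama.cpp server
    api_key="sk-no-key-required",         # llama-server ignores the key, but LiteLLM expects one
)

agent = Agent(
    role="Assistant",
    goal="Answer questions",
    backstory="A helpful local assistant.",
    llm=llm,
)

task = Task(description="Say hello.", expected_output="A greeting.", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff()  # this is where the run crashes when the server returns the 500 above
print(result)
```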
Is there a solution to this issue? I also tried using Ollama, but encountered a different problem, possibly related to LiteLLM.
Thanks!