How do you use new Deepseek-r1 LLM in CrewAI with OpenRouter?

Has anyone figured out how to use OpenRouter with CrewAI? I'm particularly interested in testing the new DeepSeek-R1 reasoning LLM that just came out, as it is supposed to be comparable to OpenAI's o1 model. It would also be nice to test other models available via OpenRouter.

I got it to work (sort of). Here's how I configured it in my crew:

deepseek_r1 = LLM(
    model="openrouter/deepseek/deepseek-r1",
    temperature=0,
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
)

My OPENAI_API_KEY is actually my OpenRouter API key.

It works whenever I don't use any tools. However, as soon as I use a tool (custom or built-in CrewAI tools), it breaks. Obviously this is problematic.

Anyone have any ideas on how to solve this problem? I need to use tools in my application.

By the way, Discourse keeps mangling the base URL above for some reason. It should be https://openrouter.ai/api/v1.

I've been able to use it through the LiteLLM packaged in the install by changing the model to deepseek/deepseek-reasoner. It's also available via Ollama. One limitation I've found is that "deepseek-reasoner does not support successive user or assistant messages". I can get it to perform a single task, but it errors out after that.
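For the Ollama route, a minimal sketch (my assumptions: a local Ollama server on the default port 11434, and that you've already pulled the model with `ollama pull deepseek-r1`):

```python
from crewai import LLM

# Sketch: point CrewAI's LLM at a local Ollama server instead of OpenRouter.
# The "ollama/" prefix routes the call through LiteLLM's Ollama provider;
# no API key is needed for a local server.
deepseek_r1_local = LLM(
    model="ollama/deepseek-r1",
    base_url="http://localhost:11434",
    temperature=0,
)
```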

"ERROR:root:LiteLLM call failed: litellm.BadRequestError: DeepseekException - Error code: 400 - {'error': {'message': 'The last message of deepseek-reasoner must be a user message, or an assistant message with prefix mode on (refer to https://api-docs.deepseek.com/guides/chat_prefix_completion).', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}"
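One workaround idea, purely a sketch and untested against CrewAI's internals: since the API rejects successive messages with the same role, you could collapse consecutive same-role messages into one before the request goes out. Where to hook this in (a custom LLM wrapper, a LiteLLM callback, etc.) is left open; this only shows the message transformation itself:

```python
# Workaround sketch: deepseek-reasoner rejects successive user/assistant
# messages, so merge consecutive messages that share a role into a single
# message before sending. Input dicts are not mutated.
def merge_consecutive_roles(messages):
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged

msgs = [
    {"role": "user", "content": "Do task A."},
    {"role": "user", "content": "Then summarize."},
    {"role": "assistant", "content": "Done."},
]
# merge_consecutive_roles(msgs) yields two messages: one merged user
# turn, then the assistant turn.
```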

Maybe someone else has a solution, or LiteLLM / CrewAI is working on an integration?


The very same here, and I haven't found a way to overcome it yet.

Same problem here. R1 is not production-ready for CrewAI at the moment. Hope this changes soon!

deepseek_reasoner_r1 = LLM(
    model="openrouter/deepseek/deepseek-r1",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPEN_ROUTER_API_KEY"),
)

along with your OpenRouter key set in the OPEN_ROUTER_API_KEY environment variable.
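Until tool calling works with R1, one pattern worth trying is to give R1 only to agents that don't use tools, and a tool-capable model to the rest. A sketch, not a verified fix; the agent fields and the gpt-4o-mini choice are illustrative assumptions:

```python
import os
from crewai import Agent, LLM

# Sketch: mix models within one crew. R1 handles pure-reasoning agents;
# a tool-capable model (gpt-4o-mini here, an assumption) handles agents
# that need tools. Roles/goals below are made up, not a full crew setup.
r1 = LLM(
    model="openrouter/deepseek/deepseek-r1",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPEN_ROUTER_API_KEY"),
)
tool_llm = LLM(model="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))

analyst = Agent(
    role="Analyst",
    goal="Reason over the gathered findings",
    backstory="...",
    llm=r1,          # no tools on this agent
)
researcher = Agent(
    role="Researcher",
    goal="Gather data",
    backstory="...",
    llm=tool_llm,    # tool calls go through the tool-capable model
    tools=[],        # add your tools here
)
```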
