Has anyone figured out how to use OpenRouter with CrewAI? I'm particularly interested in testing the new DeepSeek-R1 reasoning LLM that just came out, since it's supposed to be on par with OpenAI's o1 model. It would also be nice to test other models that are available via OpenRouter.
I got it to work (sort of). Here's how I configured it in my crew:
deepseek_r1 = LLM(
    model="openrouter/deepseek/deepseek-r1",
    temperature=0,
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
)
My OPENAI_API_KEY is actually my OpenRouter API key.
It works whenever I don't use any tools. However, as soon as I use a tool (either custom or built-in CrewAI tools), it breaks. Obviously this is problematic.
Anyone have any ideas on how to solve this problem? I need to use tools in my application.
(By the way, Discourse kept replacing the base URL in the snippet above with a link title, "Model Not Found | OpenRouter". The correct value is https://openrouter.ai/api/v1.)
I've been able to use it through the LiteLLM package bundled with the install by changing the model to deepseek/deepseek-reasoner. It's also available via Ollama. One limitation I've found is that "deepseek-reasoner does not support successive user or assistant messages". I can get it to perform a single task, but it errors out after that.
"ERROR:root:LiteLLM call failed: litellm.BadRequestError: DeepseekException - Error code: 400 - {'error': {'message': 'The last message of deepseek-reasoner must be a user message, or an assistant message with prefix mode on (refer to https://api-docs.deepseek.com/guides/chat_prefix_completion).', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}"
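One possible workaround for that 400 error, independent of CrewAI or LiteLLM internals, would be to pre-process the message history so that consecutive same-role messages are merged and the conversation always ends with a user turn. This is just an illustrative sketch (the function name and the trailing "Continue." prompt are my own inventions, not part of any library):

```python
def sanitize_for_reasoner(messages):
    """Merge consecutive same-role messages and force a trailing user turn,
    so the history satisfies deepseek-reasoner's alternation requirement."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Collapse back-to-back messages from the same role into one.
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append(dict(msg))
    if merged and merged[-1]["role"] != "user":
        # The API requires the last message to be a user message.
        merged.append({"role": "user", "content": "Continue."})
    return merged

history = [
    {"role": "user", "content": "Summarize the report."},
    {"role": "assistant", "content": "Draft summary..."},
    {"role": "assistant", "content": "Refined summary..."},
]
print(sanitize_for_reasoner(history))
```

You would have to hook something like this in wherever the messages are assembled before the LiteLLM call, which may not be straightforward from inside CrewAI.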
Maybe someone else has a solution, or LiteLLM / CrewAI is working on an integration?
Same here, and I haven't found a way to overcome that yet.
Same problem here. R1 isn't production-ready for CrewAI at the moment. Hope this changes soon!
deepseek_reasoner_r1 = LLM(
model="openrouter/deepseek/deepseek-r1",
base_url="https://openrouter.ai/api/v1",
api_key=os.getenv("OPEN_ROUTER_API_KEY"),
)
along with your OpenRouter key set in the OPEN_ROUTER_API_KEY environment variable.
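Independent of CrewAI, a quick way to sanity-check your key and the model slug is to hit OpenRouter's OpenAI-compatible chat completions endpoint directly. A minimal sketch using only the standard library (the env var name and model slug are taken from the posts above):

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, model="deepseek/deepseek-r1"):
    """Build a chat-completion request against OpenRouter's
    OpenAI-compatible endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.getenv('OPEN_ROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    req = build_request("Say hello")
    # Uncomment to actually call the API (requires a valid key):
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this works but the CrewAI config doesn't, the problem is in the integration layer rather than the key or the model slug.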