Has anyone figured out how to use OpenRouter with CrewAI? I'm particularly interested in testing the new DeepSeek-R1 reasoning LLM that just came out, as it's supposed to be as good as OpenAI's o1 model. It would also be nice to test other models that are available via OpenRouter.
I got it to work (sort of). Here's how I configured it in my crew:
import os
from crewai import LLM

deepseek_r1 = LLM(
    model="openrouter/deepseek/deepseek-r1",
    temperature=0,
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
)
My OPENAI_API_KEY is actually my OpenRouter API key.
It works whenever I don't use any tools. However, when I use a tool (custom or built-in CrewAI tools), it breaks. Obviously this is problematic.
Anyone have any ideas on how to solve this problem? I need to use tools in my application.
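For reference, here's the no-tools setup that does work on my end (a minimal sketch; the role/goal/backstory values are just placeholders):

from crewai import Agent

# Minimal sketch: with no tools attached, this agent runs fine on R1.
researcher = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="A concise analyst.",
    llm=deepseek_r1,  # the LLM configured above
)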
I've been able to use it through the LiteLLM packaged in the install by changing the model to deepseek/deepseek-reasoner. It's also available via Ollama. One limitation I've found is that deepseek-reasoner does not support successive user or assistant messages: I can get it to perform a single task, but it errors out after that.
"ERROR:root:LiteLLM call failed: litellm.BadRequestError: DeepseekException - Error code: 400 - {'error': {'message': 'The last message of deepseek-reasoner must be a user message, or an assistant message with prefix mode on (refer to https://api-docs.deepseek.com/guides/chat_prefix_completion).', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}"
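The only way I've gotten partway around that is a pre-processing hack of my own (the normalize_for_reasoner helper below is hypothetical, not anything official from CrewAI or LiteLLM): collapse consecutive same-role messages and make sure the conversation ends on a user turn before calling the model.

# Hypothetical workaround, not an official CrewAI/LiteLLM fix:
# deepseek-reasoner rejects successive same-role messages and requires
# the conversation to end on a user message, so normalize the history.
def normalize_for_reasoner(messages):
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Collapse runs of same-role messages into a single message.
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append(dict(msg))
    if merged and merged[-1]["role"] == "assistant":
        # deepseek-reasoner wants the last message to come from the user.
        merged.append({"role": "user", "content": "Continue."})
    return merged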
Maybe someone else has a cleaner solution, or LiteLLM / CrewAI is working on an integration?
The very same here, and I haven't found a way to overcome that yet.
Same problem here. R1 isn't production-ready for CrewAI at the moment. Hope this changes soon!
deepseek_reasoner_r1 = LLM(
    model="openrouter/deepseek/deepseek-r1",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPEN_ROUTER_API_KEY"),
)
along with your OpenRouter API key.
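If tool use is the blocker, one thing worth trying (assuming your CrewAI version exposes the function_calling_llm parameter on Agent; gpt-4o-mini below is just an example of a tool-capable model) is to let R1 do the reasoning while another model handles the actual function calls:

import os
from crewai import Agent, LLM

# Sketch only: delegate tool/function calls to a model that supports them,
# while R1 remains the main reasoning LLM.
tool_caller = LLM(
    model="gpt-4o-mini",
    api_key=os.getenv("OPENAI_API_KEY"),
)

agent = Agent(
    role="Analyst",
    goal="Answer questions using the available tools",
    backstory="Delegates tool invocation to a tool-capable model.",
    llm=deepseek_reasoner_r1,          # R1 for the reasoning
    function_calling_llm=tool_caller,  # handles the tool-call formatting
)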
I have deployed deepseek-r1 using Modal. I had to hack their example script, and trying to use it with CrewAI used up my monthly free credits, so I added my credit card; I have since hit my billing limit and still haven't managed to get this to work using the Cursor IDE. The AI keeps going around in circles blaming CrewAI for not sending the data in the format required by the LLM, so I am getting loads of 500 errors. I would love to know how to use this model from CrewAI. For the other LLMs I have tried, I used the OllamaLLM, OpenAILLM, and GeminiLLM classes from LangChain, so it is a shame there is not (yet) one I can use.
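For what it's worth, CrewAI's own LLM class can usually be pointed at any OpenAI-compatible endpoint, so a self-hosted deployment might be wired up like this (a sketch under assumptions: the Modal URL and auth variable are placeholders, and your server must actually expose an OpenAI-style API):

import os
from crewai import LLM

# Sketch: the "openai/" prefix tells LiteLLM to treat this as a generic
# OpenAI-compatible endpoint; replace the placeholders with your deployment.
self_hosted_r1 = LLM(
    model="openai/deepseek-r1",                # served model name (placeholder)
    base_url="https://your-app.modal.run/v1",  # placeholder Modal endpoint
    api_key=os.getenv("MODAL_API_TOKEN"),      # or whatever auth your server expects
)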
Bro, you can try this approach. (I haven't topped up my account, so it shows insufficient balance.)
from crewai import LLM

def test_deepseek():
    deepseek_reasoner_r1 = LLM(
        model="custom_openai/deepseek-reasoner",
        base_url="https://api.deepseek.com",
        api_key="xxx",
    )
    print(deepseek_reasoner_r1.call(messages=[{"role": "user", "content": "What is the capital of China?"}]))
Model availability unfortunately depends on what is available under LiteLLM. We are working on alternatives for now.
I actually did manage to fix this in the end.
Has anyone been able to get it to work with tools? I got it to work with OpenRouter, but I can't use tools (which obviously limits its usefulness).
DeepSeek-R1 is not built to work with tool calling yet. There are variants created recently that can do tool calling.