Has anyone here managed to use the Perplexity API, either as an LLM or as a tool? How did you find its performance? Any help regarding the implementation would be appreciated.
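Roughly what I'm trying to wire up, as an untested sketch: pointing CrewAI's LLM wrapper (which routes calls through LiteLLM) at Perplexity. The model id "perplexity/sonar" and the PERPLEXITYAI_API_KEY variable are my assumptions based on LiteLLM's provider naming, so adjust them to whatever your account actually exposes.

```python
import os
from crewai import LLM, Agent

# Assumption: LiteLLM reads the Perplexity key from this variable.
os.environ["PERPLEXITYAI_API_KEY"] = "pplx-..."

# Assumption: "perplexity/sonar" is a valid LiteLLM model id for Perplexity.
perplexity_llm = LLM(
    model="perplexity/sonar",
    base_url="https://api.perplexity.ai",
)

researcher = Agent(
    role="Researcher",
    goal="Answer questions with up-to-date web results",
    backstory="Relies on Perplexity's online models for search-grounded answers.",
    llm=perplexity_llm,
)
```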
Hey, were you able to implement it? I need it for my project, and some help would be appreciated.
It seems Perplexity changed something on their end: the crews add custom stop parameters, which Perplexity doesn't support, so Perplexity is just broken for me.
Never mind, it's not a Perplexity issue at all; it actually seems to be an issue with LiteLLM.
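To illustrate the symptom, a call like the one below shows the shape of it (a rough sketch: the model id is a guess and the stop string is just an example, since CrewAI injects its own during the agent loop). The plain call goes through, while the one with stop sequences is what comes back as a 400 for me.

```python
import litellm

messages = [{"role": "user", "content": "What's new in AI this week?"}]

# Plain completion routed to Perplexity: this goes through fine.
resp = litellm.completion(model="perplexity/sonar", messages=messages)
print(resp.choices[0].message.content)

# Adding stop sequences (as CrewAI does internally) triggers the 400 error.
litellm.completion(
    model="perplexity/sonar",
    messages=messages,
    stop=["\nObservation:"],
)
```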
There is an issue tracked in the CrewAI GitHub repo for it, and someone suggested a monkey patch that works around it for now.
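I don't have the exact snippet from the issue handy, but the general shape of that kind of patch is to wrap litellm.completion and strip the stop argument before the request goes out, something like:

```python
import litellm

_original_completion = litellm.completion

def _completion_without_stop(*args, **kwargs):
    # Drop the stop sequences CrewAI injects, since Perplexity rejects them.
    kwargs.pop("stop", None)
    return _original_completion(*args, **kwargs)

# The patch has to be applied before the crew is kicked off.
litellm.completion = _completion_without_stop
```

Setting litellm.drop_params = True might be another way to shed unsupported parameters, but I haven't verified that it covers this case.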
I tried that patch and unfortunately just got back an OpenAI 500 error instead of the LiteLLM 400 error. You can find more detail in my reply to this topic.