I am running a CrewAI process in a hierarchical setup, but sometimes I encounter LLM errors during execution.
I would like to know:
- Is there a way to add a callback functionality that triggers automatically whenever an LLM error occurs?
- If yes, how can I implement it within the CrewAI framework?
Any guidance, best practices, or examples would be greatly appreciated.
Thanks in advance!
What do you want to achieve with the callback?
There isn’t anything like that as far as I know, but if you describe what you’re trying to achieve, people will have better context to advise you on what to do.
Thanks for your response!
The main issue I am facing is that I occasionally get LLM errors like:
- “Received None or empty response from LLM call.”
- “An unknown error occurred. Please check the details below. Error details: Invalid response from LLM call - None or empty.”
If I run the same Crew with the same task multiple times, sometimes it works fine, but other times I get the above errors.
I want to handle these errors more gracefully: perhaps with a retry mechanism, a fallback response, or a way to detect and log them properly within the CrewAI framework.
Is there a recommended approach for handling such intermittent LLM failures? Would implementing a callback or an exception-handling mechanism help in this case?
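Something along these lines is what I have in mind: a generic wrapper that retries the kickoff call whenever it raises, logs each failure, and gives up after a few attempts. This is only a sketch; `run_with_retries` and `flaky_kickoff` are placeholder names I made up, and `flaky_kickoff` just simulates a `crew.kickoff()` that fails intermittently:

```python
import time

def run_with_retries(fn, max_attempts=3, delay=1.0):
    """Call fn(), retrying on any exception.

    Logs each failure; re-raises the last exception once
    max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:
            print(f"Attempt {attempt} failed: {exc}")
            if attempt == max_attempts:
                raise
            time.sleep(delay)

# Demo: a stand-in for crew.kickoff() that fails twice, then succeeds.
calls = {"n": 0}

def flaky_kickoff():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("Invalid response from LLM call - None or empty.")
    return "final answer"

result = run_with_retries(flaky_kickoff, max_attempts=3, delay=0)
print(result)
```

In a real Crew I would pass something like `lambda: crew.kickoff(inputs=...)` as `fn`, but I’m not sure whether retrying the whole kickoff is the right granularity, or whether CrewAI has a built-in hook for this.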
Any guidance or best practices would be much appreciated!