Handling LLM Errors in Hierarchical CrewAI Process with Callbacks

Hey @nmvega, how’s it going?

If I understood you correctly, you're looking for more observability, correct? If so, I recommend checking out two other threads where this topic came up. In both, I suggested using custom listeners to receive CrewAI bus events in a less intrusive way.

In that last thread, @tonykipkemboi mentioned that they have more advanced tracing capabilities available on the enterprise platform.
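For reference, the custom-listener idea boils down to subscribing handlers to typed events on a bus. The sketch below is a self-contained stand-in, not CrewAI's actual API: `EventBus`, `LLMCallFailedEvent`, and the handler names are all illustrative, chosen to mirror the register-by-decorator pattern that CrewAI's event bus exposes.

```python
from collections import defaultdict
from typing import Callable, Type

class EventBus:
    """Minimal stand-in for an event bus like CrewAI's (illustrative only)."""

    def __init__(self) -> None:
        self._handlers: dict[Type, list[Callable]] = defaultdict(list)

    def on(self, event_type: Type) -> Callable:
        """Decorator that registers a handler for a given event type."""
        def register(handler: Callable) -> Callable:
            self._handlers[event_type].append(handler)
            return handler
        return register

    def emit(self, source: object, event: object) -> None:
        """Dispatch an event to every handler registered for its type."""
        for handler in self._handlers[type(event)]:
            handler(source, event)


class LLMCallFailedEvent:
    """Hypothetical event carrying the error message of a failed LLM call."""

    def __init__(self, error: str) -> None:
        self.error = error


bus = EventBus()
captured_errors: list[str] = []

@bus.on(LLMCallFailedEvent)
def log_llm_failure(source: object, event: LLMCallFailedEvent) -> None:
    # In a real listener you might log, alert, or trace here.
    captured_errors.append(event.error)

bus.emit("crew", LLMCallFailedEvent("rate limit exceeded"))
print(captured_errors)  # → ['rate limit exceeded']
```

The appeal of this pattern is that error handling stays out of your crew/task definitions: you attach listeners on the side, so the observability layer can be added or removed without touching the hierarchical process itself.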
