I have been experimenting with CrewAI over the past year and have completed the two DeepLearning.AI courses, which helped me understand the framework well.
I am now building an internal application for my organization. The main use case is content generation with a human in the loop.
During development, I ran into a limitation: out of the box, HITL input is collected through the terminal, which does not work in a production web environment.
I tested several approaches, including Streamlit and Chainlit, but I decided to build the system with Django. The goal is to provide a proper frontend where users can review outputs, provide feedback to specific agents, and continue crew execution.
The only solution that worked for me was overriding CrewAgentExecutorMixin._ask_human_input and handling the interaction asynchronously using Celery and Redis.
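To make the pattern concrete, here is a minimal, runnable sketch of the idea: instead of blocking on `input()` in the terminal, the override publishes the agent's output on a per-execution channel and then blocks until the web frontend pushes feedback back. The `hitl:pending:`/`hitl:feedback:` key names, the `execution_id`, and the `InMemoryBroker` stand-in are my own assumptions for illustration; in the real setup the broker would be a Redis client (`rpush`/`blpop`) and the blocking call would live inside a Celery task, not CrewAI's API.

```python
import json
import queue


class InMemoryBroker:
    """Stand-in for a Redis client (rpush/blpop) so this sketch runs
    without a Redis server. Swap in redis.Redis() in production."""

    def __init__(self):
        self._queues = {}

    def rpush(self, key, value):
        self._queues.setdefault(key, queue.Queue()).put(value)

    def blpop(self, key, timeout=0):
        # Mirrors Redis BLPOP: blocks until an item arrives,
        # returns a (key, value) pair.
        q = self._queues.setdefault(key, queue.Queue())
        return key, q.get(timeout=timeout or None)


def ask_human_input_via_web(broker, execution_id, final_answer):
    """Replacement body for CrewAgentExecutorMixin._ask_human_input:
    publish the agent output for the frontend to render, then block
    until the reviewer's feedback is pushed onto the feedback key."""
    broker.rpush(f"hitl:pending:{execution_id}",
                 json.dumps({"output": final_answer}))
    _, feedback = broker.blpop(f"hitl:feedback:{execution_id}", timeout=300)
    return feedback


# Usage sketch: a Django view would rpush the reviewer's text onto
# hitl:feedback:<execution_id>; here we simulate that step up front.
broker = InMemoryBroker()
broker.rpush("hitl:feedback:run-1", "Looks good, shorten the intro.")
print(ask_human_input_via_web(broker, "run-1", "Draft article text"))
```

Because the override blocks a worker for the duration of the review, running crew execution inside Celery (rather than the Django request cycle) is what keeps the web process responsive.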
I would appreciate feedback on this approach. Has anyone implemented HITL in a web environment without relying on the terminal? I am especially interested in alternative patterns or cleaner architectural solutions.
Hi Paul, thanks for your response.
I have already tried Flows as well (forgot to mention it). However, with Flows I lose the crew advantages, such as memory and the shared context between my tasks.
My current implementation has 9 agents, each sharing its output with the others. What I noticed with the Flows implementation is that I end up making direct calls to the LLM instead of running a crew. Am I missing something here?
If you need further clarification to help you understand my scenario, please let me know.
Hi, after using it as a tool in my Flows attempt, I noticed that the context I send to the LLM is very large. That is why I am trying to find a workaround based on CrewAI's native human_input.
Could you explain your suggestion and workaround in a bit more detail?