Human in the loop - workaround

Hi everyone,

I have been experimenting with CrewAI over the past year and completed the two DeepLearning courses. They helped me understand the framework well.

I am now building an internal application for my organization. The main use case is content generation with a human in the loop.

During development, I ran into a limitation. HITL input is handled through the terminal. That does not work for a production web environment.

I tested several approaches, including Streamlit and Chainlit, but I decided to build the system with Django. The goal is to provide a proper frontend where users can review outputs, provide feedback to specific agents, and continue crew execution.

The only solution that worked for me was overriding CrewAgentExecutorMixin._ask_human_input and handling the interaction asynchronously using Celery and Redis.
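To make the idea concrete, here is a minimal, hedged sketch of that override. It assumes the method can simply be replaced with a function that publishes the prompt under a request id and then blocks (polls) until the web frontend writes a reply. For a runnable example I use in-memory dicts in place of Redis and a background thread in place of the real frontend; in the actual app these would be Redis reads/writes and the Celery worker would do the polling.

```python
import threading
import time
import uuid

# In-memory stand-ins for Redis keys; in the real system these would be
# redis.set / redis.get calls, and the Django frontend would write REPLIES.
PENDING: dict = {}
REPLIES: dict = {}

def ask_human_input_via_queue(prompt: str, timeout: float = 5.0,
                              poll: float = 0.1) -> str:
    """Hypothetical replacement for CrewAgentExecutorMixin._ask_human_input.

    Instead of reading from the terminal, publish the prompt under a
    request id and poll until the web UI posts the human's answer.
    """
    request_id = str(uuid.uuid4())
    PENDING[request_id] = prompt              # frontend lists pending requests
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if request_id in REPLIES:             # frontend wrote the reply
            PENDING.pop(request_id, None)
            return REPLIES.pop(request_id)
        time.sleep(poll)
    raise TimeoutError(f"No human reply for request {request_id}")

def _fake_frontend() -> None:
    # Simulates a user answering the first pending request via the web UI.
    time.sleep(0.2)
    request_id = next(iter(PENDING))
    REPLIES[request_id] = "approved"

threading.Thread(target=_fake_frontend).start()
answer = ask_human_input_via_queue("Review the draft?")
```

In the Celery version the worker would not busy-poll: it can suspend the task and let the frontend's reply trigger a resume, which avoids tying up a worker while the human thinks.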

I would appreciate feedback on this approach. Has anyone implemented HITL in a web environment without relying on the terminal? I am especially interested in alternative patterns or cleaner architectural solutions.

Looking forward to your thoughts.
---

Congratulations on finishing the two courses. Have you checked out the latest documentation for Human in the Loop? And are you on the latest version?


Hi Paul, thanks for your response.
I have already tried Flows as well (forgot to mention it). However, with Flows I lose crew advantages such as memory, as well as the context shared between my tasks.
My current implementation has 9 agents, each of which shares its output with the others. What I noticed with the Flows implementation is that I end up making direct calls to the LLM instead of running a crew. Am I missing something here?
If you need further clarification to help you understand my scenario, please let me know.

The cleanest workaround I’ve seen is treating the human as a gated tool, not an agent.

i.e. the agent pauses → emits a structured request → waits for external approval → resumes with the injected context.

If the human is modeled as an agent, you usually get loops or role confusion.
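To illustrate the pattern above, here is a minimal sketch, assuming nothing beyond plain Python: the "gate" records a structured request, the agent's execution halts at the WAITING state, and an external approval injects the answer that the crew resumes with. The class and method names (`HumanGate`, `request`, `approve`, `resume_context`) are hypothetical, not part of CrewAI.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    WAITING = "waiting_for_human"   # request emitted, execution paused
    RESUMED = "resumed"             # human answer injected, crew continues

@dataclass
class HumanGate:
    """Hypothetical 'human as gated tool': the agent emits a structured
    request and halts until approve() supplies the reply."""
    requests: dict = field(default_factory=dict)

    def request(self, request_id: str, question: str) -> Status:
        # Called from the agent side: register the question and pause.
        self.requests[request_id] = {"question": question, "answer": None}
        return Status.WAITING

    def approve(self, request_id: str, answer: str) -> None:
        # Called from the web UI: the human supplies the answer.
        self.requests[request_id]["answer"] = answer

    def resume_context(self, request_id: str) -> str:
        # Called when execution resumes: inject the answer as context.
        answer = self.requests[request_id]["answer"]
        if answer is None:
            raise RuntimeError("still waiting for human input")
        return f"Human feedback: {answer}"

gate = HumanGate()
status = gate.request("r1", "Approve the headline?")
gate.approve("r1", "yes, publish")          # done later, from the frontend
ctx = gate.resume_context("r1")
```

Because the human never appears in the agent roster, there is no role for the LLM to get confused about; the gate is just a checkpoint the orchestration layer waits on.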

Hi, after modeling the human as a tool in my Flows attempt, I noticed that the context I send to the LLM becomes very large. That is why I am trying to find a workaround using CrewAI's native human_input.

Can you explain a little bit more your suggestion and workaround?

Hi everyone, has anybody tested a workaround?