I’m working on a project using CrewAI and would like some guidance on how to implement a conversational agent that interacts continuously with users, similar to a customer service representative. The agent should adapt its questions based on previous user responses, maintaining the conversation until the user ends the session.
Is there a native way to implement this type of “human-in-the-loop” interaction in CrewAI? If anyone has developed something similar, I’d greatly appreciate examples or suggestions on how to approach it.
Thanks in advance for any insights or advice you can share!
@guilegarcia The purpose of a conversational multi-agent system (i.e., a chatbot) is different from that of a multi-agent system using HITL:
A chatbot is meant, well, for chatting. You have no control over the process by which the crew arrives at the final output. For example, if the crew gets something wrong, the final output will be incorrect.
User’s input → CrewAI → CrewAI’s output
HITL is meant to guide the crew. You have control over the process by which the crew arrives at the final output. For example, if the crew gets something wrong, you can tell it what's wrong, and it will take that into account. Consequently, the final output will be correct.
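User's input → CrewAI → draft output → user's feedback → CrewAI → correct final output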
If you want to make a chatbot using CrewAI that maintains the conversation until the user ends the session, then simply set memory=True on the crew as follows:
my_crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    verbose=True,  # Optional: prints detailed execution logs
)
If you want to implement HITL using CrewAI, then simply set human_input=True on the task as follows:
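my_task = Task(
    description="...",
    expected_output="...",
    agent=...,
    human_input=True,  # The agent pauses and asks for your feedback before finalizing its answer
)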
Okay, but I currently have a scenario where one 'agent' should behave like a chatbot to collect information, such as for airline tickets: gathering details like origin, destination, round trip, etc. With that information in hand, it would pass it on to the rest of the crew to proceed with the work. In this context, does it make more sense to separate the crews? And would the first 'agent' really be an 'agent' at all?
I see the value in this concept. Essentially (if I'm understanding correctly), you want to chat with an AI managing agent that can trigger flows, lets you provide the context those flows need, and retains memory across conversations.
This is something that I think is possible to do now, but I'm not sure whether it's built into CrewAI or whether it requires an additional LLM-based chat layer that has tools to trigger flows and kickoffs.
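For what it's worth, one way I could see wiring that up today is giving the chat-facing agent a tool that simply wraps a crew kickoff. A rough sketch only (the tool name, its arguments, and booking_crew are all hypothetical, and the BaseTool import path may vary by CrewAI version):

from crewai.tools import BaseTool

class KickoffBookingCrewTool(BaseTool):
    name: str = "Kick off booking crew"
    description: str = "Starts the booking crew once origin, destination, and trip type have been gathered."

    def _run(self, origin: str, destination: str, round_trip: bool) -> str:
        # booking_crew is assumed to be a Crew defined elsewhere in the app
        result = booking_crew.kickoff(inputs={
            "origin": origin,
            "destination": destination,
            "round_trip": round_trip,
        })
        return str(result)

The chat agent keeps the conversation going and only calls the tool once it has collected everything the downstream crew needs.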
Have the task write its output to a file, and read the file yourself. Then what about writing a simple tool that takes input from the console (after you have read the output in the file) and uses it to capture your question precisely? Pass it as tools=[MyCustomTool(result_as_answer=True)], the tool's output being the question you typed. You can tell the agent to use this tool immediately after it writes the output file. Once your question is in the context, you could instruct a different agent or task to revise the document it previously wrote to the file.
Use a conditional flow to quit when you enter a response indicating you're satisfied with the document that has been written to the file.
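A rough sketch of such a console-input tool (the class body here is mine; the BaseTool import path may differ depending on your CrewAI version):

from crewai.tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Ask the user"
    description: str = "Asks the human a question on the console and returns the reply verbatim."

    def _run(self, question: str) -> str:
        # Blocks until the human types a reply; the reply becomes the tool output
        return input(f"{question}\n> ")

With tools=[MyCustomTool(result_as_answer=True)] on the agent, the human's reply is passed through unchanged as the task's final answer.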
In my case, I have used human_input=True, and it works. But the problem starts when I expose the crew over an API using FastAPI or LangServe: running a POST request works, the crew's agents begin, and since verbose is set to True I can see the logs in the terminal. When I need to provide feedback, though, I can only do it in the terminal, which is not user friendly. Think of using that API from a mobile or web app: I can't go to the console every time to enter the feedback. The API should send the request for feedback to the user, the user should add it and send it back, and the crew should continue. Any help would be appreciated.
I don't quite understand how to implement a simple chatbot with a never-ending agent <-> human loop. I want a conversational chatbot that replies to user questions and can optionally decide to use tools. Right now I can only kick off a crew given a Task; I can't create this kind of infinite loop.
The only solution I can think of is a Flow with two states. One state awaits the user's question; once it arrives, it updates the Flow state (appending the message) and transitions to a second state where I actually kick off a crew with a Task containing the user's question. Once I have the response, I send it back to the user and transition back to the first state, in an infinite-loop fashion.
The conversation history is passed to the Crew each time and is kept up to date as the Flow's internal state.
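For reference, the same idea can also be sketched without a Flow, as a plain loop around kickoff(). Everything below is illustrative: the agent/task wording is mine, and the {history} and {user_message} placeholders are filled in by kickoff(inputs=...):

from crewai import Agent, Task, Crew

chat_agent = Agent(
    role="Support assistant",
    goal="Answer the user's questions, using tools when helpful",
    backstory="A friendly customer service representative.",
)
chat_task = Task(
    description=(
        "Conversation so far:\n{history}\n\n"
        "Reply to the user's latest message: {user_message}"
    ),
    expected_output="A helpful reply to the user's latest message.",
    agent=chat_agent,
)
crew = Crew(agents=[chat_agent], tasks=[chat_task])

history = []
while True:
    user_message = input("You: ")
    if user_message.lower() in ("quit", "exit"):
        break  # the user ends the session
    result = crew.kickoff(inputs={
        "history": "\n".join(history),
        "user_message": user_message,
    })
    print(f"Assistant: {result}")
    history += [f"User: {user_message}", f"Assistant: {result}"]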
Are there any better solutions? I think use cases that aren't one-shot are important.
We are working on building something like this using Slack as the interface. I think it may require a standard LLM chatbot that can trigger crew and flow kickoffs after gathering the requirements from the human. I'll let you know how it goes; we plan on deploying this within 2 weeks.
I'm using Streamlit for the interface in my project, and I'm trying to integrate an agent with human_input=True. While human_input=True seems to work fine, the agent's prompts only appear in the console, not in the Streamlit interface. Has anyone managed to get this interaction to display directly in the Streamlit UI? Any advice on integrating this smoothly would be greatly appreciated. Thank you!
Thanks for your response,
These are the key components I'm using in my Streamlit application:
import yaml
import streamlit as st

# Initialize RAGLLM model
if "RAGLLM_model" not in st.session_state:
    print("Loading RAGLLM model")
    # Reading params
    with open(config_path + "ragllm_params.yml", "r") as stream:
        try:
            params = yaml.safe_load(stream)
        except yaml.YAMLError as exc:
            print(exc)
    # Creating RAGLLM module
    st.ragllm = rag_creation(directorypath=params["datapath"],
                             chunk_size=params["chunk_size"],
                             chunk_overlap=params["chunk_overlap"],
                             model=params["model"],
                             temperature=params["temperature"],
                             credentials_path=params["credentials_path"],
                             assistant_role_instruction=params["assistant_role_instruction"])
    if st.ragllm is None:
        st.stop()
    st.session_state["RAGLLM_model"] = params["model"]
    print("RAGLLM model loaded")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input
if prompt := st.chat_input("What is up?"):
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": prompt})
    # Display user message in chat message container
    with st.chat_message("user"):
        st.markdown(prompt)
    # Display assistant response in chat message container
    with st.chat_message("assistant"):
        response = st.write_stream(response_generator(prompt, st.ragllm, st.session_state["RAGLLM_model"]))
    # Add assistant response to chat history
    st.session_state.messages.append({"role": "assistant", "content": response})
I am also working on a similar project that includes a conversational customer-service chatbot. Were you able to find a solution for making a conversational agent?
Yes, we got it working. The conversation history is actually already provided through the Slack interface, so all we need is a Slack tool that loads that conversation based on the thread being sent over. It's actually pretty simple; you could do the same thing by storing the messages to and from the initial agent that receives the instructions.
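The tool itself is just a thin wrapper around the Slack API. Roughly (illustrative only; this uses slack_sdk's conversations_replies, and the tool name, message formatting, and token env var are mine):

import os

from crewai.tools import BaseTool
from slack_sdk import WebClient

class SlackThreadTool(BaseTool):
    name: str = "Load Slack thread"
    description: str = "Loads the full message history of a Slack thread as plain text."

    def _run(self, channel: str, thread_ts: str) -> str:
        # Fetch every message in the thread and flatten it into "user: text" lines
        client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
        replies = client.conversations_replies(channel=channel, ts=thread_ts)
        return "\n".join(
            f"{m.get('user', 'bot')}: {m.get('text', '')}"
            for m in replies["messages"]
        )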
I'm also interested in doing the same thing with a Streamlit chat app within the application itself, so I can not only test the initial contact within the crew but also perform testing and training on individual agents as well as crews.
This would be a huge benefit if it were built into CrewAI, but I could see it being an add-on or plug-in that people could add to their existing crew apps, which would be pretty awesome.