Hallucinations happening with agents

I built an AMS project using CrewAI that integrates with JIRA, ServiceNow, GitHub, and a knowledge base (RAG). I am facing hallucination issues — can you please tell me how to fix them? I am facing a lot of issues.

That’s pretty cool that you got all those integrations working. I’d love to hear how easy it was. Or hard. I’d like to try the same thing for my company.

As for hallucinations: do you have observability on your LLM calls? That will help you identify where in your pipeline you need more focus. Hallucinations are a challenge we all face.
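If your framework doesn't give you call-level observability out of the box, even a crude wrapper helps. A minimal sketch — `llm_fn` stands in for whatever raw completion function your setup exposes (hypothetical here), and the log path is just an example:

```python
import json
import time

def logged_llm_call(llm_fn, prompt, log_path="llm_calls.jsonl"):
    """Wrap any LLM call so every prompt/response pair lands in a JSONL
    log you can inspect later. `llm_fn` takes a prompt string and
    returns the model's response string."""
    start = time.time()
    response = llm_fn(prompt)
    record = {
        "ts": start,
        "latency_s": round(time.time() - start, 3),
        "prompt": prompt,
        "response": response,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

Grepping that log is usually enough to see which call first produced an invented ticket number or URL.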

Look at your task prompt and be sure to tell it to only use retrieved sources.
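Something along these lines has worked for me — the exact wording is just an example, tune it to your own tasks:

```python
# Illustrative grounding instructions for a RAG task prompt; the
# refusal sentence and wording are assumptions, not a fixed recipe.
GROUNDED_TASK_PROMPT = """\
Answer ONLY using the retrieved context below.
If the context does not contain the answer, reply exactly:
"I could not find this in the knowledge base."
Do not invent ticket numbers, IDs, or URLs.

Context:
{context}

Question:
{question}
"""

def build_prompt(context, question):
    return GROUNDED_TASK_PROMPT.format(context=context, question=question)
```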

Look at your retrieved chunks to see if it is even possible to answer the question based on the chunks. These first two are a great place to start.

Afterwards, you will know whether you need to change your embedding model or add prompt expansion.

Then you can check your output response and experiment with the LLM used to generate it.

Lastly, add citations so you can build credibility.
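One cheap way to enable citations is to number the chunks you pass in, so the model can refer to `[1]`, `[2]`, etc. A minimal sketch — the `(source, text)` pair shape is an assumption about your retriever's output:

```python
def format_context_with_citations(chunks):
    """Number each retrieved chunk so the model can cite [1], [2], ...
    `chunks` is a list of (source_name, text) pairs (hypothetical
    shape; adapt to whatever your retriever returns)."""
    lines = []
    for i, (source, text) in enumerate(chunks, start=1):
        lines.append(f"[{i}] ({source}) {text}")
    return "\n".join(lines)
```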

Good luck

This is not happening in RAG; it is happening with JIRA and ServiceNow. Even when the project ID is not present at all, it shows that it is creating the ticket. Same with ServiceNow: it is generating random INC numbers and URLs.
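For fabricated tool results like that, one approach is to make the tool itself the source of truth: verify the project exists before creating anything, fail loudly if it doesn't, and return only the real key/URL from the API. Then the agent has nothing plausible to invent. A sketch — `client` and its `project_exists`/`create_issue` methods are hypothetical stand-ins for your actual JIRA SDK calls:

```python
class TicketToolError(Exception):
    """Raised when a ticket operation cannot actually be performed."""

def create_jira_ticket(client, project_id, summary):
    """Guarded ticket-creation tool. The tool verifies the project and
    returns only the *actual* issue key and URL from the API response,
    so the agent can never report a ticket that was not created."""
    if not client.project_exists(project_id):
        # Fail loudly instead of letting the LLM narrate a fake success.
        raise TicketToolError(f"Project '{project_id}' does not exist in JIRA.")
    issue = client.create_issue(project_id, summary)
    return {"key": issue["key"], "url": issue["url"]}
```

The same pattern applies to ServiceNow: return the INC number from the API response, never let the model compose one.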

Great work.

For me, when I face hallucinations, I break the crew down into smaller parts. It will take a little time, but break it down and then find the problematic area.
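Concretely, that can be as simple as running each step on its own and recording what comes out, instead of one end-to-end crew run. A generic sketch — the `(name, fn)` step shape is just an illustration, not a CrewAI API:

```python
def run_steps_individually(steps, inputs):
    """Debugging harness: run each pipeline step in isolation and keep
    a trace of its output, so you can spot the first step whose output
    goes wrong. `steps` is an ordered list of (name, fn) pairs."""
    trace = []
    data = inputs
    for name, fn in steps:
        data = fn(data)
        trace.append({"step": name, "output": data})
    return trace
```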

Hope this helps

I’m curious, what sort of hallucinations have they been, for example?

Generally they will get lost if I am too broad. I call a hallucination anything that goes off my script.

Could it be due to the temperature setting of the LLM? Which LLM are you using, and can you share your settings?
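For reference, this is the kind of settings dump that would help — the model name and values here are just placeholder assumptions:

```python
# Hypothetical LLM settings of the kind the question asks for.
# temperature=0 makes sampling greedy, which usually reduces (but does
# not eliminate) invented ticket numbers and URLs.
llm_settings = {
    "model": "gpt-4o",      # assumption: swap in whatever model you use
    "temperature": 0.0,     # 0.0 = most deterministic; higher = more creative
    "top_p": 1.0,
    "max_tokens": 1024,
}
```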