I'm trying to analyze 50+ transcripts sequentially and then compile the individual results into one complete analysis. Each transcript is a .txt file in my directory “/data”. I set up an agent to read the directory and iterate through all the transcripts, but around the 8th transcript it always hits the “iteration limit or time limit”, loses its memory, and goes back to reading the 1st transcript again.
[2024-09-13 18:16:21][DEBUG]: == [Transcript Orchestrator] Task output: Agent stopped due to iteration limit or time limit.
That model has a 128k-token context window, so you are overflowing it after about 8 transcripts. I haven't looked at the CrewAI code, but I am assuming that when you read those transcripts in, they all get appended to the overall context CrewAI sends to the LLM. Try a model like Gemini 1.5 Flash, which has a 1M-token context window.
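To sanity-check whether your files actually fit, a rough heuristic for English text is ~4 characters per token. Here is a minimal sketch under that assumption (the 4-chars-per-token ratio is approximate; for exact counts use your model's tokenizer):

```python
import os

CHARS_PER_TOKEN = 4        # rough heuristic for English text (assumption)
CONTEXT_LIMIT = 128_000    # e.g. a 128k-token model

total_tokens = 0
for name in sorted(os.listdir("/data")):
    if not name.endswith(".txt"):
        continue
    with open(os.path.join("/data", name), encoding="utf-8") as f:
        tokens = len(f.read()) // CHARS_PER_TOKEN
    total_tokens += tokens
    print(f"{name}: ~{tokens} tokens (running total: ~{total_tokens})")

print("Fits in context" if total_tokens < CONTEXT_LIMIT else "Overflows context")
```

If the running total crosses the limit around the 8th file, that matches the behavior you're seeing.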
If you need more than that, you will have to use a RAG tool.
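Even before reaching for RAG, you can often sidestep the context limit with a map-reduce pattern: summarize each transcript in its own isolated LLM call, then compile only the (much smaller) summaries. A minimal sketch; `call_llm` is a hypothetical stand-in for whatever client you use, not a CrewAI API:

```python
import os

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client (OpenAI, Gemini, etc.)."""
    raise NotImplementedError

# Map step: each transcript is summarized in its own call,
# so no single request ever holds more than one transcript.
summaries = []
for name in sorted(os.listdir("/data")):
    if name.endswith(".txt"):
        with open(os.path.join("/data", name), encoding="utf-8") as f:
            text = f.read()
        summary = call_llm(f"Summarize this transcript:\n\n{text}")
        summaries.append(f"{name}:\n{summary}")

# Reduce step: compile the summaries into one complete analysis.
final_analysis = call_llm(
    "Compile a complete analysis from these summaries:\n\n" + "\n\n".join(summaries)
)
print(final_analysis)
```

This keeps every request well under the context window regardless of how many transcripts you have, at the cost of detail lost in the per-file summaries.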