Iteration limit or time limit issue when reading too many files

I am trying to analyze 50+ transcripts sequentially before compiling a complete analysis of all of them.

Each transcript is a .txt file in my directory “/data”. I set up an agent to read the directory and iterate through all the transcripts, but around the 8th transcript it always hits the “iteration limit or time limit” and loses its memory, causing it to go back and read the 1st transcript again.

 [2024-09-13 18:16:21][DEBUG]: == [Transcript Orchestrator] Task output: Agent stopped due to iteration limit or time limit.

I have enabled memory=true on this crew.
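
Roughly, the setup looks like this (a simplified sketch, not my exact code — paths, roles, and task wording are illustrative):

```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import DirectoryReadTool, FileReadTool

# Tools that let the agent list the /data directory and read individual .txt transcripts
dir_tool = DirectoryReadTool(directory="/data")
file_tool = FileReadTool()

analyst = Agent(
    role="Transcript Analyst",
    goal="Analyze every meeting transcript in /data",
    backstory="You analyze Zoom call transcripts and extract the key points.",
    tools=[dir_tool, file_tool],
    verbose=True,
)

analysis_task = Task(
    description="List all .txt transcripts in /data, read each one, and analyze it.",
    expected_output="An analysis of each transcript, followed by a combined summary.",
    agent=analyst,
)

crew = Crew(
    agents=[analyst],
    tasks=[analysis_task],
    process=Process.sequential,
    memory=True,  # memory enabled on the crew
)
result = crew.kickoff()
```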

Any advice on how to overcome this?

Try setting max_iter on your agent (it defaults to 25, so try setting it higher).
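
Something along these lines (the value is just an example):

```python
from crewai import Agent

analyst = Agent(
    role="Transcript Analyst",
    goal="Analyze every transcript in /data",
    backstory="You analyze Zoom call transcripts.",
    max_iter=100,  # default is 25; give the agent more loops for 50+ files
)
```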


Just out of curiosity, how many characters are there in the first 8 transcripts?

thanks! I’ll try it out!

Each is about 30 minutes’ worth of Zoom transcript (avg. 23,750 chars).

What’s the context size for the model you are using?

I’m using the default for gpt-4o-mini, but the error states 8192 tokens if I remember correctly.


That model has a 128k token context, so you are overflowing it with 8 transcripts. Again, I have not looked at the CrewAI code, but I am assuming that when you read those transcripts in, they are being added to the overall CrewAI context going to the LLM. Try a model like Gemini 1.5 Flash, which has a 1M token context.
If you need more than that, you will have to use a RAG tool.
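
For example, one way to point the agent at Gemini 1.5 Flash, assuming a recent CrewAI version with the LLM wrapper (model name and key handling depend on your setup):

```python
from crewai import Agent, LLM

# Gemini 1.5 Flash through CrewAI's LLM wrapper (LiteLLM under the hood);
# expects GEMINI_API_KEY to be set in the environment.
gemini_llm = LLM(model="gemini/gemini-1.5-flash")

analyst = Agent(
    role="Transcript Analyst",
    goal="Analyze every transcript in /data",
    backstory="You analyze Zoom call transcripts.",
    llm=gemini_llm,
    max_iter=100,
)
```

And if even that context is not enough, a RAG-style tool such as TXTSearchTool from crewai_tools (which indexes a .txt file and lets the agent query it) is one option instead of stuffing whole transcripts into the prompt.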


This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.