Context limit and hallucinations when reading from a file

There is a significant issue with the way CrewAI reads from files (JSON in this case). It might be partly due to the large size of the file (201 KB).

When the task prompt asks the agent to summarise or interpret a file after reading it, and to count the total number of specific elements, it always reports incorrect information. For example, when I ask it to count the number of elements in the JSON file, it reports 100, whereas the true count is 98, and it reports all kinds of other wrong stats. This happens both when the task uses the LangChain FileManagementToolkit and when it uses a custom tool.
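The custom tool doesn't do anything fancy; it just reads the file and returns the raw text to the agent, roughly like this simplified sketch (not my exact code, and depending on the crewai version the import may be crewai_tools.BaseTool instead):

from pathlib import Path

from crewai.tools import BaseTool  # crewai_tools.BaseTool in older releases


class JSONReadTool(BaseTool):
    name: str = "JSON file reader"
    description: str = "Reads a JSON file from disk and returns its contents as text."

    def _run(self, file_path: str) -> str:
        # Hand the raw text straight back to the LLM, which then summarises and counts.
        return Path(file_path).read_text(encoding="utf-8")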

Here is the config for the LLM:
return LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    temperature=0.1,
    max_tokens=2048,
    top_p=1,
    stop=["\n\nHuman"],
)

Is there a limit on the file size agents can comfortably read?
What is the potential reason for it not being able to read accurately?

I also have memory=True but caching=False set for the agent.

I would appreciate your input on this, as it's quite unusable in its current state.

@Moshrul_Hussain Regarding hallucinations, set allow_code_execution to True for the agent. This allows the agent to write and run code when executing tasks, which should improve performance. The default is False.

Regarding hitting the context window limit, the agent should respect it, since respect_context_window is set to True by default.
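For illustration, here is a minimal sketch of an agent with those flags set, reusing the Bedrock LLM config from the question (the role/goal/backstory are placeholders, and field names may vary slightly between crewai versions):

from crewai import LLM, Agent

llm = LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    temperature=0.1,
    max_tokens=2048,
)

analyst = Agent(
    role="JSON analyst",  # placeholder
    goal="Summarise the JSON file and report exact element counts",  # placeholder
    backstory="A careful analyst who verifies counts by running code.",  # placeholder
    llm=llm,
    allow_code_execution=True,    # lets the agent write and run code while executing tasks
    respect_context_window=True,  # default; keeps requests within the model's context limit
    memory=True,                  # as in the original setup
    cache=False,                  # as in the original setup
)

With code execution enabled, the agent can load the JSON and count elements programmatically rather than estimating the total from text, which is generally far more reliable for exact counts.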