How do CrewAI agents/tasks handle scenarios where the input tokens exceed the LLM's context window?

I provided an input of approximately 400,000 tokens to a task (greater than the LLM's 200,000-token context window). At first, the task threw an error stating that the context limit was exceeded, but it then automatically started summarizing the input and continued execution. Is this designed behavior in CrewAI, i.e., automatically summarizing the input when it exceeds the LLM's context window? And how are large input tokens typically handled?
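For context, the alternative I was considering was manually chunking the input before handing it to the task. A minimal sketch (the token count here is only approximated by whitespace-split words; a real tokenizer such as tiktoken would be more accurate, and the function name is just illustrative):

```python
def chunk_text(text: str, max_tokens: int) -> list[str]:
    # Rough token estimate: one whitespace-separated word ~ one token.
    words = text.split()
    chunks = []
    for i in range(0, len(words), max_tokens):
        # Each chunk stays within the approximate token budget.
        chunks.append(" ".join(words[i : i + max_tokens]))
    return chunks
```

Each chunk could then be summarized in its own task and the summaries combined for the final task, rather than relying on whatever automatic summarization kicked in above.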