Speed up the execution

I have built a crew with a single agent and a single task. It uses two tools: search and a scraper. The LLM inference itself is fast, but I see it spend a lot of time at certain points during execution (iterations), e.g. after doing a search it sometimes takes a long time to move ahead, and also after errors. I am not using memory and I haven't set any limits on the number of calls or iterations.

Is there a way to speed up the entire process?

Monitoring helped. One of the issues was that the input tokens-per-minute limit was getting exhausted.
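When a tokens-per-minute limit is the bottleneck, one option is to throttle on the client side instead of waiting for the provider to reject calls and retry. A minimal sketch of a sliding-window limiter you could call before each LLM/tool request (all names here are illustrative, not part of any crew framework):

```python
import time
from collections import deque

class TokenRateLimiter:
    """Sliding-window limiter: delays a call if sending `tokens` more
    would exceed `max_tokens_per_min` over the past 60 seconds."""

    def __init__(self, max_tokens_per_min: int):
        self.max_tokens = max_tokens_per_min
        self.window = deque()  # (timestamp, token_count) pairs

    def _used(self, now: float) -> int:
        # Drop entries older than 60 s, then sum what remains.
        while self.window and now - self.window[0][0] >= 60:
            self.window.popleft()
        return sum(t for _, t in self.window)

    def acquire(self, tokens: int) -> float:
        """Record `tokens`; sleep first if the window is full.
        Returns the number of seconds slept."""
        slept = 0.0
        now = time.monotonic()
        while self.window and self._used(now) + tokens > self.max_tokens:
            # Sleep until the oldest entry falls out of the window.
            wait = 60 - (now - self.window[0][0])
            time.sleep(max(wait, 0))
            slept += max(wait, 0)
            now = time.monotonic()
        self.window.append((now, tokens))
        return slept

# Usage: estimate the prompt's token count, then gate the call.
limiter = TokenRateLimiter(max_tokens_per_min=1000)
limiter.acquire(400)  # returns 0.0 while under budget
```

This avoids the long stalls you see after a 429-style error, because the pause happens up front and is only as long as needed for the window to clear.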

Do you use the rpm parameter?
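For reference, CrewAI exposes a `max_rpm` setting on both `Agent` and `Crew` that caps requests per minute, and `max_iter` bounds the agent's iteration loop. A sketch of where those knobs go (the role/goal strings are placeholders; check the parameter names against your installed CrewAI version):

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Find and summarize sources",
    backstory="An analyst who searches and scrapes the web",
    max_rpm=10,   # throttle this agent to 10 requests per minute
    max_iter=15,  # bound the tool-use loop so it cannot spin on errors
)

task = Task(
    description="Research the topic and summarize the findings",
    expected_output="A short summary",
    agent=researcher,
)

# max_rpm can also be set crew-wide; the crew pauses itself
# to stay under the cap instead of hitting provider rate limits.
crew = Crew(agents=[researcher], tasks=[task], max_rpm=10)
# result = crew.kickoff()
```

Tuning `max_rpm` just below your provider's limit tends to replace the long unexplained stalls with short, predictable pauses.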