How ScrapeWebsiteTool works with 2 Agents

Hello, I have an agent with two tools, a Google Search tool and a Map Search tool. Both work with Google's Serper API. The agent extracts good knowledge from the links and also creates the required list of places. But I don't know whether my agent can actually read those websites or not, because if it cannot explore the website content, then I have to use ScrapeWebsiteTool. Also, what if I create a new agent that only extracts useful content from these websites? Would both agents be able to work together? My first agent finds many useful websites, and if my second agent does not explore all of them, that will create an issue.

So please guide me on the best way to do this. If my first agent can explore the websites it has found without ScrapeWebsiteTool, then there is no issue.

Hello Khush,

Going through your issue, the current implementation may not be the best way to use agents. It is generally better to define agents with a narrow task, which results in more precise and consistent output.

In your case, it would be better to define separate agents with one tool each, define a task for each agent, and run them. This should yield better results.

Make sure you order the tasks correctly.
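To make the idea concrete, here is a minimal sketch of the "narrow agents, one tool each, run in order" pattern in plain Python, with stubbed functions standing in for the real Serper search tool and ScrapeWebsiteTool. All class and function names here are hypothetical illustrations, not CrewAI's actual API:

```python
# Sketch of "one tool per agent, tasks run in order".
# search_tool / scrape_tool are stubs standing in for the real
# Serper-backed search tool and ScrapeWebsiteTool.

def search_tool(query):
    # Stand-in for a Serper search: returns candidate URLs only.
    return ["https://example.com/places", "https://example.com/food"]

def scrape_tool(url):
    # Stand-in for ScrapeWebsiteTool: returns page text for one URL.
    return f"content of {url}"

class NarrowAgent:
    """An agent that owns exactly one tool and one responsibility."""
    def __init__(self, name, tool):
        self.name = name
        self.tool = tool

searcher = NarrowAgent("searcher", search_tool)
scraper = NarrowAgent("scraper", scrape_tool)

# Ordering matters: the scraper consumes the searcher's output.
urls = searcher.tool("best places to visit")
pages = [scraper.tool(u) for u in urls]
print(len(pages))  # one scraped page per URL the searcher found
```

In CrewAI itself this maps to two `Agent` definitions with one tool each and a `Crew` running their `Task`s sequentially, so the second task receives the first task's output.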

Thank you for your attention!

Yes, I want to generate an itinerary using AI agents. For that, I first gave the agent the Google Search and Map Search tools so it can find the best places that match the user's interests. But if I split them across separate agents, it will be difficult to handle both of them. I also want to know whether I need ScrapeWebsiteTool or not.

So please suggest what to do: do I need ScrapeWebsiteTool or not? If yes, then how, since the links I get are intermediate results?

No issues Khush Patel,

I understand your concern, and frankly the choice depends on your needs.

Google search APIs generally return metadata such as titles, snippets, and URLs; they don't typically provide the full content of a webpage. If you are happy with such results, a multi-agent system won't really be necessary.
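For example, a Serper-style search response looks roughly like the structure below (the exact field names are an assumption here, so check the actual response you get): each organic result carries only a title, a snippet, and a link, never the page body.

```python
# Illustrative Serper-style search response. Field names are an
# assumption for illustration; verify against the real API response.
serper_response = {
    "organic": [
        {
            "title": "Top 10 places to visit",
            "link": "https://example.com/places",
            "snippet": "A short preview of the page, not the full text...",
        },
    ],
}

first = serper_response["organic"][0]
print(sorted(first.keys()))  # ['link', 'snippet', 'title'] -- no page body
```

This is why a search-only agent can rank and list places from snippets, but needs something like ScrapeWebsiteTool to actually read what each page says.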

If a more in-depth itinerary is what you are after, then I would suggest building a multi-agent system, where the first agent searches for and finds the best URLs, and the second scrapes those websites, either all at once or iteratively, keeping in mind the amount of data you need and the LLM's context limit.
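The iterative variant can be sketched like this, with a stubbed scrape function and a crude character budget standing in for the LLM's context limit (the function names and the budget number are assumptions, not CrewAI internals):

```python
def scrape(url):
    # Stand-in for ScrapeWebsiteTool; returns raw page text.
    return f"Long article text from {url}. " * 50

def gather_content(urls, char_budget=2000):
    """Scrape URLs one by one, stopping before the budget is exceeded."""
    chunks, used = [], 0
    for url in urls:
        remaining = char_budget - used
        if remaining <= 0:
            break  # budget exhausted; remaining URLs are skipped
        text = scrape(url)
        chunks.append(text[:remaining])  # truncate the last page to fit
        used += len(chunks[-1])
    return "\n\n".join(chunks)

context = gather_content([
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
])
print(len(context))  # never more than the 2000-character budget
```

In practice you would summarize each scraped page before accumulating it, rather than truncating raw text, but the control flow (scrape, check the budget, stop or continue) is the same.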

Hope things are clearer now!