I use SpiderTool for some agents with gpt-4o, and I get these errors nearly every time an agent searches the web:
Agent: Professioneller YouTube-Content-Researcher
Thought: The search results did not provide specific information from the Handpan-Portal website related to video conclusions or CTAs. I will access the Handpan-Portal YouTube channel directly to gather potential insights on how they typically conclude their videos related to Handpan and musical instruments.
If you don’t need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:
Thought: I now can give a great answer
Final Answer: my best complete final answer to the task.
I use a YAML file to define the agent. The definition is originally in German; I have translated it into English here:
topic_researcher:
  role: >
    …
  goal: |
    …
    Search on the Internet:
    Use the tool "SerperDevTool" for searching on the Internet. Then select the appropriate search results and crawl their URLs with the tool "SpiderTool".
  backstory: |
    …
  llm: openai/gpt-4o
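As a side note, the llm field above is what should decide the model for this agent. To double-check which model each agent actually ends up with at runtime, a quick sketch like this can help (the module and class names are just placeholders for my project, and the llm attribute on Agent is an assumption based on how recent CrewAI versions expose it):

```python
# Rough sanity check: print the model each agent resolves to.
# "my_project" and "MyProjectCrew" are placeholders; adjust to your project layout.
from my_project.crew import MyProjectCrew

crew = MyProjectCrew().crew()
for ag in crew.agents:
    # getattr guards against the attribute name differing between crewai versions
    print(ag.role, "->", getattr(ag, "llm", "unknown"))
```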
The task is also defined in a YAML file; again, I have translated it from German:
research_task:
  description: |
    …
    1. **Initial information search:**
       - **Tool:** Use the tool **SerperDevTool** to search the internet for relevant information on the topic "{topic}".
       - **Keywords:** Use short and easily understandable keywords that correspond to the typical search behavior of people on the internet.
       - **Selection of results:** Choose the most suitable and relevant search results and note their URLs.
    2. **Content analysis:**
       - **Tool:** Use the tool **Spider scrape & crawl tool** to scrape the selected URLs and extract their content completely. IMPORTANT: Follow the tool's instructions and the formatting guidelines EXACTLY!
       - **Data extraction:** Collect all relevant information from the scraped websites.
  expected_output: |
    …
  agent: topic_researcher
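Since the task insists on following SpiderTool's input format exactly, one thing that has helped me is printing the tool's argument schema and making sure the task instructions mirror it. A rough sketch (assuming the tool exposes a pydantic args_schema, as CrewAI tools usually do; the dump method name depends on your pydantic version):

```python
from crewai_tools import SpiderTool

# Print the input schema SpiderTool expects, so the task/agent instructions can match it.
# args_schema is assumed to be a pydantic model class; pydantic v2 uses
# model_json_schema(), v1 uses schema().
spider = SpiderTool()
dump = getattr(spider.args_schema, "model_json_schema", None) or spider.args_schema.schema
print(dump())
```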
In the crew file, I give the tools to the agent:
@agent
def topic_researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['topic_researcher'],
        # tools=[MyCustomTool()],  # example of a custom tool, imported at the top of the file
        verbose=True,
        tools=[SerperDevTool(), SpiderTool()]
    )
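For context, this snippet assumes the usual imports at the top of the crew file (standard CrewAI project template; adjust if your layout differs):

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool, SpiderTool
```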
@Milan Make sure to set gpt-4o for all agents! By default, gpt-4o-mini is used, which is a less capable LLM and may cause errors. Try this and let me know if it fixes the issue.
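If the YAML setting does not seem to take effect, you can also pass the model explicitly in the crew file, roughly like this (a sketch; the LLM class exists in recent crewai releases, and passing the plain model string should also work):

```python
from crewai import Agent, LLM

@agent
def topic_researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['topic_researcher'],
        llm=LLM(model="openai/gpt-4o"),  # force gpt-4o instead of the gpt-4o-mini default
        verbose=True,
        tools=[SerperDevTool(), SpiderTool()]
    )
```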
Hi, and thank you very much for taking the time to help me!
I had already set all research agents to gpt-4o, but the error still occurred. I just did a test run with the research agents on gpt-4o-mini: there were no errors in that run, but the agents responded in English instead of German.