Hi,
I use SpiderTool with some gpt-4o agents, and I get these errors nearly every time an agent searches the web.
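For context, my setup looks roughly like this (simplified; the role, goal, and task texts are placeholders, not my real prompts):

```python
from crewai import Agent, Crew, Task
from crewai_tools import SpiderTool

# SpiderTool picks up SPIDER_API_KEY from my environment
spider_tool = SpiderTool()

researcher = Agent(
    role="Web Researcher",  # placeholder text
    goal="Find and summarize information from the web",
    backstory="An agent that searches and scrapes websites.",
    tools=[spider_tool],
    llm="gpt-4o",
)

task = Task(
    description="Research the given topic on the web and summarize it.",
    expected_output="A short summary with sources.",
    agent=researcher,
)

Crew(agents=[researcher], tasks=[task]).kickoff()
```

The first error looks like this: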
I encountered an error while trying to use the tool. This was the error: 1 validation error for SpiderToolSchema
params
Field required [type=missing, input_value={'url': 'https://www.theg…itar', 'mode': 'scrape'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
Tool Spider scrape & crawl tool accepts these inputs: Tool Name: Spider scrape & crawl tool
Tool Arguments: {'url': {'description': 'Website URL', 'type': 'str'}, 'params': {'description': 'Set additional params. Options include:
- limit: Optional[int] - The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.
- depth: Optional[int] - The crawl limit for maximum depth. If 0, no limit will be applied.
- metadata: Optional[bool] - Boolean to include metadata or not. Defaults to False unless set to True. If the user wants metadata, include params.metadata = True.
- query_selector: Optional[str] - The CSS query selector to use when extracting content from the markup.', 'type': 'Union[dict[str, Any], NoneType]'}, 'mode': {'description': 'Mode, the only two allowed modes are scrape or crawl. Use scrape to scrape a single page and crawl to crawl the entire website following subpages. These modes are the only allowed values even when ANY params is set.', 'type': 'Literal[scrape, crawl]'}}
Tool Description: Scrape & Crawl any url and return LLM-ready data.
I encountered an error while trying to use the tool. This was the error: 2 validation errors for SpiderToolSchema
params
Input should be a valid dictionary [type=dict_type, input_value='{"metadata": false', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/dict_type
mode
Input should be 'scrape' or 'crawl' [type=literal_error, input_value='"scrape"', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/literal_error
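To make the failure modes concrete, here is a minimal re-creation of the schema with plain pydantic (my own reconstruction from the error output above, not the actual class from crewai_tools). The first call reproduces the missing params error, the second reproduces the string/quoting errors, and the last shows input that validates:

```python
from typing import Any, Literal, Optional

from pydantic import BaseModel, ValidationError


# Reconstructed from the error output above; the real SpiderToolSchema
# lives in crewai_tools and may differ in details.
class SpiderToolSchemaRepro(BaseModel):
    url: str
    params: Optional[dict[str, Any]]  # Optional type, but still a required field
    mode: Literal["scrape", "crawl"]


# Error 1: the model omitted 'params' entirely -> "Field required"
try:
    SpiderToolSchemaRepro(url="https://example.com", mode="scrape")
except ValidationError as e:
    print(e)

# Error 2: 'params' sent as a JSON string and 'mode' with embedded quotes
try:
    SpiderToolSchemaRepro(
        url="https://example.com",
        params='{"metadata": false',  # a (truncated) string, not a dict
        mode='"scrape"',              # literal quotes inside the value
    )
except ValidationError as e:
    print(e)

# What the model would need to emit instead:
SpiderToolSchemaRepro(url="https://example.com", params={"metadata": False}, mode="scrape")
```

So gpt-4o sometimes omits params entirely, and sometimes serializes the arguments as strings instead of proper JSON types.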
I have already tried to specify in the task definition how the agent should use the tool (params as a real dictionary and mode as a bare scrape or crawl string), but that did not work.
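The only other idea I have is to wrap SpiderTool in a custom tool whose schema gives params a default, so the model can simply omit it. This is an untested sketch; the class and field names are my own, and I am not sure the delegation to the real tool works exactly like this:

```python
from typing import Any, Literal, Optional

from crewai.tools import BaseTool
from crewai_tools import SpiderTool
from pydantic import BaseModel, Field


class LenientSpiderSchema(BaseModel):
    url: str = Field(..., description="Website URL")
    params: Optional[dict[str, Any]] = Field(default=None, description="Optional Spider params")
    mode: Literal["scrape", "crawl"] = Field(default="scrape", description="scrape or crawl")


class LenientSpiderTool(BaseTool):
    name: str = "Spider scrape & crawl tool"
    description: str = "Scrape & Crawl any url and return LLM-ready data."
    args_schema: type[BaseModel] = LenientSpiderSchema

    def _run(self, url: str, params: Optional[dict[str, Any]] = None, mode: str = "scrape") -> str:
        # Fill in the defaults the model omitted, then delegate to the real tool
        return SpiderTool().run(url=url, params=params or {}, mode=mode)
```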
Does somebody have an idea how to solve this properly?
Best regards and thanks for any help!
Milan