@tool("search_report")  # decorator
def search_report(search_query: str) -> str:
    """Search the report for a specific query to extract a specific field.
    Input must be a single string representing the search query.
    """
    # Explicitly call the base tool with the correct argument name
    try:
        return base_report_tool._run(search_query)
    except Exception as e:
        return f"Error searching: {str(e)}"
extraction_task = Task(
    description=(
        "1. …"
    ),
    expected_output="… specified fields.",
    tools=[search_report],
    output_pydantic=OutputFields,
    agent=analyst_agent
)
analyst_agent = Agent(
    role='Data Extractor',
    goal='Extract precise fields from the report.',
    backstory=(
        "When using the tool, "
        "provide ONLY the 'search_query' parameter as a string. "
        "Do not add extra parameters like 'document_type'. "
        "IMPORTANT: Only use the tool provided."
    ),
    tools=[search_report],
    llm=ollama_llm,
    allow_delegation=False,
    verbose=True
)
Hi @SG391
The issue is likely caused by the tool name mismatch between what the @tool decorator registers and what the LLM calls. In your code, the decorator name is "search_report", but the underlying TXTSearchTool has its own internal name.
In v1.10.0+, stricter JSON argument parsing and validation was introduced, and v1.10.1 added a "resolve name collisions" fix, meaning tool-name matching is now more strictly enforced.
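To make the "stricter argument validation" idea concrete, here is a small standalone sketch (plain Python, not CrewAI's actual internals): it checks the kwargs a caller supplies against the tool function's signature and rejects anything the signature does not declare, which is the behavior that breaks calls carrying invented extras like 'document_type'.

```python
import inspect

def validate_tool_args(func, kwargs):
    """Reject any keyword argument the tool's signature does not declare.

    Illustrative sketch only; CrewAI's real validation logic differs.
    """
    allowed = set(inspect.signature(func).parameters)
    unexpected = set(kwargs) - allowed
    if unexpected:
        raise TypeError(f"Unexpected tool argument(s): {sorted(unexpected)}")
    return func(**kwargs)

def search_report(search_query: str) -> str:
    # Stand-in for the real tool body
    return f"results for {search_query!r}"

# A well-formed call passes through:
print(validate_tool_args(search_report, {"search_query": "total revenue"}))

# An extra, LLM-invented parameter such as 'document_type' is rejected:
try:
    validate_tool_args(search_report,
                       {"search_query": "total revenue", "document_type": "txt"})
except TypeError as e:
    print(e)
```

Under this kind of validation, older versions that silently ignored extra keys would appear to "work", while the stricter release surfaces the error.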
The likely root cause: the LLM is calling the tool by a slightly different name (e.g., with spaces or different casing), and the stricter validation in v1.10.1 now rejects it. Try these fixes:
1. Match the tool name exactly: ensure the name passed to the @tool decorator matches what the agent will call.
2. Use the TXTSearchTool directly, without the custom wrapper.
3. If using Ollama, the LLM may be generating the wrong tool name in its output. Try adding explicit instructions in the agent's backstory to use the exact tool name search_report.

The simplest fix is option 2: since the wrapping layer introduces the name mismatch, removing it eliminates the mismatch entirely.
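The failure mode behind option 1 can be illustrated with a toy, exact-match tool registry (a sketch of the behavior, not CrewAI's code): lookup succeeds only on the exact registered name, so a near-miss name from the LLM, such as different casing or added spaces, is rejected instead of being fuzzily matched.

```python
# Toy registry illustrating strict, exact-match tool-name lookup.
# This is a sketch of the failure mode, not CrewAI's implementation.
TOOLS = {}

def register(name, func):
    TOOLS[name] = func

def call_tool(name, *args):
    # Strict matching: no fuzzy or case-insensitive fallback.
    if name not in TOOLS:
        raise KeyError(f"Unknown tool {name!r}; available: {sorted(TOOLS)}")
    return TOOLS[name](*args)

register("search_report", lambda q: f"hits for {q!r}")

# Exact name: works.
print(call_tool("search_report", "net income"))

# LLM's near-miss name: rejected under strict matching.
try:
    call_tool("Search Report", "net income")
except KeyError as e:
    print(e)
```

A looser matcher might have accepted "Search Report" before; under strict enforcement, only the exact registered string resolves, which is why pinning the decorator name and the name in the agent's instructions to the same string matters.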