Tool issue with version 1.10.1

Hi all,

I am having an issue with using tools in version 1.10.1. It works perfectly in version 1.9.3… Here is my code:

base_report_tool = TXTSearchTool(txt=temp_file_path, config=text_tool_config)

@tool("search_report")  # decorator
def search_report(search_query: str) -> str:
    """
    Search the report for a specific query to extract a specific field.
    Input must be a single string representing the search query.
    """
    # Explicitly call the base tool with the correct argument name
    try:
        return base_report_tool._run(search_query)
    except Exception as e:
        return f"Error searching: {str(e)}"

extraction_task = Task(
    description=(
        "1. …."
    ),
    expected_output="… specified fields.",
    tools=[search_report],
    output_pydantic=OutputFields,
    agent=analyst_agent
)



analyst_agent = Agent(
    role='Data Extractor',
    goal='Extract precise fields from the report.',
    backstory=(
        "When using the tool, "
        "provide ONLY the 'search_query' parameter as a string. "
        "Do not add extra parameters like 'document_type'. "
        "IMPORTANT: Only use the tool provided."
    ),
    tools=[search_report],
    llm=ollama_llm,
    allow_delegation=False,
    verbose=True
)

I keep getting 'Tool name does not match'…

Any help would be much appreciated. Thanks!

Hi @SG391
The issue is likely caused by the tool name mismatch between what the @tool decorator registers and what the LLM calls. In your code, the decorator name is "search_report", but the underlying TXTSearchTool has its own internal name.
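To make the two-layer naming concrete, here is a purely illustrative sketch (plain Python, not CrewAI internals; all names are hypothetical): a decorator registers the wrapper under one name, while the wrapped base tool keeps its own internal name, so a registry keyed on exact names ends up with two different identifiers in play.

```python
# Illustrative only -- minimal stand-ins, not CrewAI's actual classes.
registry = {}

def tool(name):
    """Minimal stand-in for a @tool decorator that registers by name."""
    def register(fn):
        registry[name] = fn
        return fn
    return register

class TxtSearch:
    name = "txt_search"  # hypothetical internal name of a built-in tool

    def run(self, query):
        return f"hits for {query!r}"

base = TxtSearch()

@tool("search_report")
def search_report(search_query: str) -> str:
    # The wrapper delegates to the base tool, but the two names differ.
    return base.run(search_query)

print(list(registry))  # ['search_report']
print(base.name)       # 'txt_search' -- not the registered wrapper name
```

If validation compares the name the LLM emits against only one of these two identifiers, a call that worked under looser matching can start failing.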

In v1.10.0+, stricter JSON argument parsing and validation was introduced, along with v1.10.1's "resolve name collisions" fix, meaning tool name matching is now more strictly enforced.

The likely root cause: the LLM is calling the tool by a slightly different name (e.g., with spaces or different casing), and the stricter validation in v1.10.1 now rejects it. Try these fixes:

  1. Match the tool name exactly: ensure the @tool decorator name matches what the agent will call:

@tool("search_report")
def search_report(search_query: str) -> str:

  2. Avoid wrapping the built-in tool: assign TXTSearchTool directly instead of wrapping it in a custom @tool decorator:

search_report = TXTSearchTool(txt=temp_file_path, config=text_tool_config)
analyst_agent = Agent(tools=[search_report], ...)

  3. If using Ollama, the LLM may be generating the wrong tool name in its output. Try adding explicit instructions in the agent's backstory to use the exact tool name search_report.

The simplest fix is option 2 — use the TXTSearchTool directly without the custom wrapper, since the wrapping layer introduces a name mismatch.
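To illustrate what stricter enforcement means in practice, here is a purely illustrative sketch (not CrewAI's validation code): an exact, case-sensitive lookup of the kind v1.10.1-style validation performs rejects any near-miss name the LLM might emit.

```python
# Illustrative only: strict tool lookup that accepts exact names and
# rejects near-misses (different casing, stray spaces, hyphens).
def find_tool(requested: str, tools: dict):
    if requested not in tools:  # exact, case-sensitive match required
        raise ValueError(f"Tool name does not match: {requested!r}")
    return tools[requested]

tools = {"search_report": lambda q: f"results for {q!r}"}

print(find_tool("search_report", tools)("total revenue"))  # exact name: OK

for near_miss in ("Search Report", "search_report ", "search-report"):
    try:
        find_tool(near_miss, tools)
    except ValueError as err:
        print(err)  # each near-miss is rejected
```

Under looser matching (e.g. normalizing case and whitespace before comparing), all four lookups would have succeeded, which is why code that ran fine on 1.9.3 can break on 1.10.1.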

Thanks @major_tiwari ,

But no luck, I am afraid. Even with option 3, I am getting: 'Input should be a valid dictionary or instance of BaseTool…'

@SG391 did you try option 2? Also, if you can share your code base or give the GitHub link, I can try.

@major_tiwari, I did try option 2, and I am still getting the same error: 'Tool name does not match'