Thanks @maxmoura! I followed your instructions with a small modification and everything worked as you outlined, though there was some extra output from an earlier crew run as well.
Here’s my output -
$ uv run main.py
/home/mre/dox/repos/timebillcrew/.venv/lib/python3.10/site-packages/pydantic/fields.py:1093: PydanticDeprecatedSince20: Using extra keyword arguments on `Field` is deprecated and will be removed. Use `json_schema_extra` instead. (Extra keys: 'required'). Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
warn(
/home/mre/dox/repos/timebillcrew/.venv/lib/python3.10/site-packages/embedchain/embedder/ollama.py:27: LangChainDeprecationWarning: The class `OllamaEmbeddings` was deprecated in LangChain 0.3.1 and will be removed in 1.0.0. An updated version of the class exists in the :class:`~langchain-ollama package and should be used instead. To use it run `pip install -U :class:`~langchain-ollama` and import as `from :class:`~langchain_ollama import OllamaEmbeddings``.
embeddings = OllamaEmbeddings(model=self.config.model, base_url=config.base_url)
/home/mre/dox/repos/timebillcrew/.venv/lib/python3.10/site-packages/alembic/config.py:592: DeprecationWarning: No path_separator found in configuration; falling back to legacy splitting on spaces, commas, and colons for prepend_sys_path. Consider adding path_separator=os to Alembic config.
util.warn_deprecated(
Using Tool: Search MySQL Database Table Content
/home/mre/dox/repos/timebillcrew/.venv/lib/python3.10/site-packages/chromadb/types.py:144: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
return self.model_fields # pydantic 2.x
--- MySQLSearchTool.run() Chunks ---
Relevant Content:
Relevant Content:
(2, 'Cookie', 'Casey', datetime.date(2013, 11, 13)
(191, datetime.date(2025, 1, 21), 'Jane Doe', 'Johnson', ', 'Design Development', 'Review lighting for presentation. Review schemes and add dog kennel furniture', Decimal('1.25'), datetime.timedelta(seconds=51300), datetime.timedelta(seconds=55800), 1, 0, 0, 'Mary', 'Jane', ', 1, Decimal('0.00'), Decimal('0.00'), Decimal('0.00'), Decimal('0.00'), 'USD', ')
(172, datetime.date(2025, 1, 13), 'Jane Doe', 'Johnson', ', 'Selections', 'shop for more lighting options for the dining area and kitchen, look for any more chest ideas, put on power point , go over fabrics with mary', Decimal('3.25'), datetime.timedelta(seconds=37800), datetime.timedelta(seconds=49500), 1, 0, 0, 'Bobby', 'Sue', 'Designer', 1, Decimal('0.00'), Decimal('0.00'), Decimal('50.00'), Decimal('162.50'), 'USD', ')
The “extra” relevant content looks like it came from my earlier crew runs, where I’m trying to build a crew that creates billing reports. I tried running
$ uv run crewai reset-memories --all
[2025-08-28 14:33:04][INFO]: [Crew (e818fcc9-c987-4fe7-9914-59f9e4ab1158)] Task Output memory has been reset
[Crew (e818fcc9-c987-4fe7-9914-59f9e4ab1158)] Reset memories command has been completed.
but the pets run continues to display the “remembered” billing data, though it does consistently find the owner for Cookie (Casey).
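My guess (an assumption, not something I’ve confirmed) is that crewai reset-memories only clears the crew’s own memory stores, while the RAG tool’s embeddings live in embedchain’s Chroma persist directory, so the old billing rows survive the reset. If that’s right, deleting that directory should force a re-embed; the db/ path below is embedchain’s usual default and may differ in your setup:

```shell
# Assumption: embedchain persists its Chroma store under ./db by default.
# Removing it forces MySQLSearchTool to re-embed the table content on the
# next run -- check your embedchain config for the actual persist path.
rm -rf db/
```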
And then I’m going to have to figure out how to get rid of those irritating warnings … 
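One way to quiet them (a sketch, not tested against my exact package versions) is a warnings filter at the very top of main.py, before the heavy imports. As far as I can tell, both PydanticDeprecatedSince20 and LangChainDeprecationWarning subclass the standard DeprecationWarning, so a single filter should catch them:

```python
import warnings

# Suppress third-party DeprecationWarning noise before the heavy imports.
# (Assumption: Pydantic's and LangChain's deprecation warning classes both
# subclass the standard DeprecationWarning, so one filter covers both.)
warnings.filterwarnings("ignore", category=DeprecationWarning)
```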
For completeness, here is my modified main.py -
from alternative_mysql_search_tool import MySQLSearchTool
import os

# # Satisfy both LiteLLM and Embedchain
# os.environ["GEMINI_API_KEY"] = "<YOUR_KEY>"
# os.environ["GOOGLE_API_KEY"] = os.environ["GEMINI_API_KEY"]

# using local models and a local MySQL database
embedchain_config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            # "task_type": "ENHANCEMENT_DOCUMENT"
        }
    }
}

mysql_tool = MySQLSearchTool(
    db_uri="mysql://vet:******@localhost:3306/pets",
    table_name="cats",
    config=embedchain_config
)

#
# Test if `MySQLSearchTool.run()` works standalone
#
user_question = "Who owns Cookie?"
relevant_chunks = mysql_tool.run(user_question)

print("--- MySQLSearchTool.run() Chunks ---")
print(relevant_chunks)
print("------------------------------------")
where the only real modifications were commenting out the API keys (since I am using a local Ollama model and a local database) and configuring the embedder to use my local Ollama nomic-embed-text model. (I had to comment out task_type because it generates a fatal error regardless of what value it is set to.)
But when I transfer this knowledge/example to my billing app, I consistently get that search_query error -
🚀 Crew: crew
└── 📋 Task: 2658bfdb-93c9-4c35-b9fe-4a3445ec3256
Status: Executing Task...
└── 🔧 Failed Search MySQL Database Table Content (1)
╭─────────────────────────────────────────────────────────── Tool Error ───────────────────────────────────────────────────────────╮
│ │
│ Tool Usage Failed │
│ Name: Search MySQL Database Table Content │
│ Error: Arguments validation failed: 1 validation error for MySQLSearchToolSchema │
│ search_query │
│ Field required [type=missing, input_value={'query': 'SELECT project...4564', 'metadata': {}}}}, input_type=dict] │
│ For further information visit https://errors.pydantic.dev/2.11/v/missing │
│ Tool Args: │
│ │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
which is why I asked how to provide the search_query for MySQLSearchToolSchema. My current thinking is that you don’t provide the search_query yourself; instead, the CrewAI framework fills it in based on the goals and expected outputs defined in the agents and tasks - but that isn’t working.
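For what it’s worth, the error itself is just Pydantic reporting that the model sent a query key where the schema requires search_query. A minimal sketch that reproduces it (the field name comes from the error message above; the class body and description are my own assumptions about what the real schema might look like):

```python
from pydantic import BaseModel, Field, ValidationError

class MySQLSearchToolSchema(BaseModel):
    # Field name taken from the error message; the description is hypothetical.
    search_query: str = Field(
        ..., description="Natural-language question to search the table content for"
    )

# The LLM produced {'query': ...} instead of {'search_query': ...}; extra keys
# are ignored by default, so validation fails on the missing required field.
try:
    MySQLSearchToolSchema(**{"query": "SELECT project ..."})
except ValidationError as e:
    print(e)  # 1 validation error ... search_query ... Field required
```

If that reading is right, a possible workaround is making the tool’s field description explicit enough that the model reliably emits search_query, rather than trying to supply it yourself.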