Tools "don't exist"

I am trying to run a crew with this custom tool:

from crewai_tools import BaseTool
import json

class EmailDateExtractor(BaseTool):
    name: str = "Email Date Extractor"
    description: str = "Extracts sent date from email."

    def _run(self, email_json: str) -> str:
        return json.loads(email_json)['date']

And I am instantiating it and passing it to a task:

    @task
    def email_sent_date_extraction(self) -> Task:
        return Task(
            config=self.tasks_config['extract_email_sent_date'], # type: ignore
            tools=[EmailDateExtractor(result_as_answer=True)]
        )

However, when I run the crew, I get (in red):

Action 'the action to take, only one name of [Email Date Extractor], just the name, exactly as it's written.' don't exist, these are the only available Actions:
 Tool Name: Email Date Extractor(email_json: str) -> str
Tool Description: Email Date Extractor(email_json: 'string') - Extracts sent date from email. 
Tool Arguments: {'email_json': {'title': 'Email Json', 'type': 'string'}}

For thoroughness, here’s the task config:

extract_email_sent_date:
  description: >
    Extract the date from the customer email sent to ****. This is the email: 
    {email}
  expected_output: >
    A date
  agent: an_agent

Using:

CrewAI Version: 0.55.2
CrewAI Tools Version: 0.12.0
Python Version: 3.12.5
Environment: Ubuntu 22.04.4
Expectation: Tool to be used.

I’ve searched the internet, the docs, the deeplearning.ai course, and the crewAI source code. I have also tried different attribute options (like lowering the temperature to near zero), but to no avail. Please help me get past this hump.

pyproject.toml

[tool.poetry]
name = "***-determination"
version = "0.1.0"
description = "Determines if incoming email is *** or ***-pending"
authors = ["mayonesa <*****@****.com>"]

[tool.poetry.dependencies]
python = ">=3.12,<=3.13"
crewai = { extras = ["tools"], version = ">=0.12.0" }

[tool.poetry.scripts]
determination = "determination.main:run"
run_crew = "determination.main:run"
train = "determination.main:train"
replay = "determination.main:replay"
test = "determination.main:test"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

src/determination/main.py

#!/usr/bin/env python
import sys
from determination.determination_crew import DeterminationCrew

# This main file is intended to be a way for you to run your
# crew locally, so refrain from adding unnecessary logic into this file.
# Replace with inputs you want to test with, it will automatically
# interpolate any tasks and agents information

def run() -> None:
    """
    Run the crew.
    """
    # TODO: externalize as YAML
    inputs = {
        "email": {
            'from': 'jane.doe@customer.com',
            'to': 'crc@****.com',
            'subject': 'Engine light on',
            'date': '2024-9-12',
            # Adjacent string literals (no commas) concatenate into one string;
            # with commas between them this would silently become a tuple.
            'body': (
                'Hi ****, our maintenance department has informed us that the engine light is on '
                'and, upon further investigation, it looks like the oil filter is in need of replacement. '
                'Please advise.'
            ),
        },
    }
    DeterminationCrew().crew_().kickoff(inputs=inputs)


def train() -> None:
    """
    Train the crew for a given number of iterations.
    """
    inputs = {
        'data': 'value',
    }
    try:
        DeterminationCrew().crew_().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)

    except Exception as e:
        raise Exception(f"An error occurred while training the crew: {e}")

def replay() -> None:
    """
    Replay the crew execution from a specific task.
    """
    try:
        DeterminationCrew().crew_().replay(task_id=sys.argv[1])

    except Exception as e:
        raise Exception(f"An error occurred while replaying the crew: {e}")

def test() -> None:
    """
    Test the crew execution and return the results.
    """
    inputs = {
        'info': ['part number', 'contract ID']
    }
    try:
        DeterminationCrew().crew_().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)

    except Exception as e:
        raise Exception(f"An error occurred while testing the crew: {e}")

src/determination/determination_crew.py

from crewai import Agent, Task, Crew
from crewai.project import CrewBase, agent, crew, task, llm
from langchain_community.llms import SagemakerEndpoint
from langchain_core.language_models.llms import LLM
import boto3

from determination.mixtral8x7b_content_handler import Mixtral8x7bContentHandler
from determination.tools.email_date_extractor import EmailDateExtractor
from determination.tools.status_determinator import StatusDeterminator


@CrewBase
class DeterminationCrew():
    """Determination crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'
    sagemaker = boto3.client("sagemaker-runtime", region_name="us-east-2")

    @agent
    def an_agent(self) -> Agent:
        return Agent(
            config=self.agents_config['an_agent'], # type: ignore
            max_iter=2,
        )

    @task
    def email_sent_date_extraction(self) -> Task:
        return Task(
            config=self.tasks_config['extract_email_sent_date'], # type: ignore
            tools=[EmailDateExtractor(result_as_answer=True)]
        )

    @task
    def determination(self) -> Task:
        return Task(
            config=self.tasks_config['determine_aog'], # type: ignore
            tools=[StatusDeterminator()]
        )

    @crew
    def crew_(self) -> Crew:
        return Crew(
            agents=self.agents, # type: ignore
            tasks=self.tasks, # type: ignore
            verbose=True,
        )

    @llm
    def sagemaker_mixtral8x7b(self) -> LLM:
        return SagemakerEndpoint(
            endpoint_name="mixtral8x7bv3", 
            client=self.sagemaker, 
            content_handler=Mixtral8x7bContentHandler(),
            model_kwargs={"temperature": 1e-10},
        )

src/determination/mixtral8x7b_content_handler.py:

from typing import Dict
from langchain_community.llms.sagemaker_endpoint import LLMContentHandler

import json

class Mixtral8x7bContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8")) # type: ignore
        return response_json[0]["generated_text"]

src/determination/config/agents.yaml:

an_agent:
  role: >
    Customer Request Processor
  goal: >
    Give professional, polite, and helpful attention to incoming customer requests.
    Use the correct tools available to you to correctly determine the AOG status for
    the customer request.
  backstory: >
    You are an excellent customer advocate. You have a record of always going the extra mile
    making your customer feel well taken care of. You excel at every task given to you. You
    have won CRC employee of the year award for the past 3 years.
  llm: sagemaker_mixtral8x7b
  verbose: true

src/determination/config/tasks.yaml:

extract_email_sent_date:
  description: >
    Extract the date from the customer email sent to ***. This is the email: 
    {email}
  expected_output: >
    A date
  agent: an_agent

determine_aog:
  description: >
    Given the date when customer email was sent, determine what the *** status is.
  expected_output: >
    The *** status. The only AOG statuses are:
    - ***
    - *** Pending
  agent: an_agent

src/determination/tools/email_date_extractor.py:

from crewai_tools import BaseTool
import json

class EmailDateExtractor(BaseTool):
    name: str = "Email Date Extractor"
    description: str = "Extracts sent date from email."

    def _run(self, email_json: str) -> str:
        return json.loads(email_json)['date']
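One possible wrinkle worth checking here (an assumption on my part, not something the traceback confirms): the `email` input in `main.py` is a Python dict, and when it is interpolated into the task description it gets rendered with `str()`, which uses single quotes — a form `json.loads` rejects. A quick stdlib sketch of the mismatch:

```python
import json

# The crew input from main.py, as a plain Python dict (trimmed for brevity).
email = {'from': 'jane.doe@customer.com', 'date': '2024-9-12'}

# Interpolating the dict into the task description renders it with str(),
# which uses single quotes -- not valid JSON for the tool's json.loads call.
interpolated = "This is the email: {email}".format(email=email)
assert "'date'" in interpolated  # single-quoted, so json.loads would choke

# Serializing the dict up front keeps the interpolated text valid JSON.
email_json = json.dumps(email)
assert json.loads(email_json)['date'] == '2024-9-12'
```

If that turns out to be the issue, passing `json.dumps(email)` as the input (or having `_run` fall back to `ast.literal_eval`) would sidestep it.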

src/determination/tools/status_determinator.py:

from crewai_tools import BaseTool
from datetime import date

class StatusDeterminator(BaseTool):
    name: str = "*** Status Determinator"
    description: str = "Determines what the status of the AOG is based on the email sent date."

    def _run(self, email_sent_iso_date: str) -> str:
        email_sent_date = date.fromisoformat(email_sent_iso_date)
        today = date.today()
        email_sent_before_today = email_sent_date < today
        status: str = '***' if email_sent_before_today else '*** Pending'
        return status
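A separate thing to keep in mind once the tool does fire (an aside, not the cause of the error above): `date.fromisoformat` is strict about zero-padded months and days, so an unpadded date like `'2024-9-12'` — the form used in the sample email — raises `ValueError`, while `strptime` with `%Y-%m-%d` accepts both forms:

```python
from datetime import date, datetime

# date.fromisoformat enforces ISO 8601 zero-padding:
# '2024-09-12' parses, '2024-9-12' raises ValueError.
try:
    date.fromisoformat('2024-9-12')
    strict_ok = True
except ValueError:
    strict_ok = False

# strptime is lenient about zero-padding, so the unpadded form parses too.
parsed = datetime.strptime('2024-9-12', '%Y-%m-%d').date()
```

Padding the input date (or parsing with `strptime` inside the tool) avoids a second failure after the tool-selection problem is fixed.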

Hi @mayonesa,

Thanks for posting your question.

From a quick review, try modifying the tool’s name to snake_case to avoid any spacing issues, like this:

from crewai_tools import BaseTool
import json

class EmailDateExtractor(BaseTool):
    name: str = "email_date_extractor"  # Change to snake_case for tool recognition
    description: str = "Extracts sent date from email."

    def _run(self, email_json: str) -> str:
        return json.loads(email_json)['date']

Also update the task config in tasks.yaml to reference the tool:

extract_email_sent_date:
  description: >
    Extract the date from the customer email sent to ***. This is the email: 
    {email}
  expected_output: >
    A date
  agent: an_agent
  tools: 
    - email_date_extractor  # Ensure this matches the tool name in snake_case

After the changes, re-run the crew and see if the issue is resolved. If not, please let us know so we can troubleshoot further.


Thank you so much @tonykipkemboi! I will be proposing crewAI to my management and customers tomorrow by demoing this. I am super excited.
I changed the tool names to snake_case. But it complains when I try to reference them in tasks.yaml, because (I’m guessing) I’m not using the @tool annotation (they are definitely instantiated and referenced in the task instantiations in determination_crew.py). The following is the result of using snake_case for the tool names:

You are Customer Request Processor
. You are an excellent customer advocate. You have a record of always going the extra mile making your customer feel well taken care of. You excel at every task given to you. You have won CRC employee of the year award for the past 3 years.

Your personal goal is: Give professional, polite, and helpful attention to incoming customer requests. Use the correct tools available to you to correctly determine the AOG status for the customer request.

You ONLY have access to the following tools, and should NEVER make up tools that are not listed here:

Tool Name: email_date_extractor(email_json: str) -> str
Tool Description: email_date_extractor(email_json: 'string') - Extracts sent date from email. 
Tool Arguments: {'email_json': {'title': 'Email Json', 'type': 'string'}}

Use the following format:

Thought: you should always think about what to do
Action: the action to take, only one name of [email_date_extractor], just the name, exactly as it's written.
Action Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using " to wrap keys and values. 

Action 'the action to take, only one name of [email_date_extractor], just the name, exactly as it's written.' don't exist, these are the only available Actions:
 Tool Name: email_date_extractor(email_json: str) -> str
Tool Description: email_date_extractor(email_json: 'string') - Extracts sent date from email. 
Tool Arguments: {'email_json': {'title': 'Email Json', 'type': 'string'}}



> Finished chain.
 [2024-09-11 21:18:40][DEBUG]: == [Customer Request Processor
] Task output: Agent stopped due to iteration limit or time limit.

I also tried lowering the temperature to 0.01 but no love.

I’m no expert, but have you had much success with tool calling on Mixtral 8x7B? I could only get consistent results from it on the general-inference side if I also used something like NexusRaven V2 for tool calling. Did you try running this with GPT-4 or a similar closed-source LLM?

I cannot use external LLM services.

Can you use Command R and NexusRaven V2 at the same time?
If so, use NexusRaven as the function_calling_llm and instruct it in the system prompt to always use valid JSON format when using tools or calling functions. Use the latest version of Command R for everything else.
Again, no expert here, but that has worked for me in difficult cases.
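For what it’s worth, CrewAI exposes a `function_calling_llm` field on the agent for exactly this split. An untested sketch, where `general_llm` and `tool_caller_llm` stand in for LLM objects built the same way as the SagemakerEndpoint shown earlier:

```python
from crewai import Agent

def build_agent(general_llm, tool_caller_llm) -> Agent:
    # Sketch only: role/goal/backstory trimmed, and both LLM arguments
    # are placeholders for whatever endpoint-backed models you construct.
    return Agent(
        role="Customer Request Processor",
        goal="Determine the AOG status for the customer request.",
        backstory="An excellent customer advocate.",
        llm=general_llm,                       # e.g. Command R for reasoning
        function_calling_llm=tool_caller_llm,  # e.g. NexusRaven V2 for tools
        max_iter=2,
    )
```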

The problem may be resolved by providing a prompt format and model_kwargs appropriate to the particular LLM.
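To expand on that: Mixtral-8x7B-Instruct is trained on the `[INST] ... [/INST]` chat template, and sending CrewAI’s raw ReAct prompt without it can produce exactly this kind of format echoing. A stdlib sketch of what the content handler’s `transform_input` might look like (the template string is Mixtral’s documented format; the `"parameters"` payload key is an assumption based on TGI-style endpoints — adjust to your endpoint’s actual schema):

```python
import json

def transform_input(prompt: str, model_kwargs: dict) -> bytes:
    # Wrap the raw prompt in Mixtral-Instruct's chat template so the
    # model treats it as an instruction rather than text to continue.
    wrapped = f"<s>[INST] {prompt} [/INST]"
    # TGI-style endpoints typically expect generation settings nested
    # under "parameters" rather than splatted at the top level.
    payload = {"inputs": wrapped, "parameters": model_kwargs}
    return json.dumps(payload).encode("utf-8")

body = transform_input("Extract the date.", {"temperature": 0.01, "max_new_tokens": 512})
```

Compare this with the handler posted above, which sends the bare prompt and merges `model_kwargs` into the top level of the request body.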
