How do I pass runtime-only arguments to a custom tool?

How can I inject sensitive runtime parameters (like a user ID) directly into my custom tool calls, so the model never has to know or reveal them, and without rebuilding or reconfiguring the agent for every single request?

I looked at other posts but they either seem closed or outdated.

Thanks ahead of time!

# crewai==0.134.0

from typing import Type

from pydantic import BaseModel, Field
from crewai import Agent, Crew, Task, Process
from crewai.tools import BaseTool

# ── 1. A super-simple DB lookup tool 
class _Input(BaseModel):
    user_id: str = Field(..., description="INTERNAL user id – must stay private")

class AddressLookupTool(BaseTool):
    # BaseTool is a pydantic model, so these need type annotations
    name: str = "Address Lookup Tool"
    description: str = "Returns the shipping address that matches user_id"
    args_schema: Type[BaseModel] = _Input

    def _run(self, user_id: str) -> str:
        # obviously stubbed:
        return f"[dummy address for {user_id}]"

lookup_tool = AddressLookupTool(result_as_answer=True)

# ── 2. Agent that owns the tool 
db_agent = Agent(
    role="DB specialist",
    goal="Fetch private customer data without leaking PII",
    backstory="Knows how to query the CRM directly.",
    tools=[lookup_tool],
    llm="gpt-4o-mini", 
    allow_delegation=False,
)

# ── 3. Task that should inject the hidden param 
address_task = Task(
    description="Retrieve the customer’s shipping address.",
    expected_output="Just the street address – no other data.",
    agent=db_agent,
    tools=[lookup_tool],
)

# ── 4. Crew & kickoff
crew = Crew(
    agents=[db_agent],
    tasks=[address_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={"user_id": "u-8732adf"})
print("FINAL RESULT:", result.raw)

Welcome to the community

Great question. I would look at creating a “token” for the sensitive user id and letting the agent pass that around. You can then decode it back to the “real” customer id inside the custom tool.
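A minimal sketch of that idea, using only the stdlib (the in-memory store and all names here are illustrative; a real app would use something persistent like Redis or a DB, with expiry):

```python
import secrets

# Opaque token -> real user id. The LLM only ever sees the token.
_token_to_user: dict[str, str] = {}

def issue_token(user_id: str) -> str:
    """Mint an opaque, single-purpose token for a sensitive id."""
    token = secrets.token_urlsafe(16)
    _token_to_user[token] = user_id
    return token

def resolve_token(token: str) -> str:
    """Decode the token back to the real id inside the custom tool."""
    return _token_to_user[token]

t = issue_token("u-8732adf")
print(resolve_token(t))  # u-8732adf
```

Even if the model echoes the token, it is meaningless outside your tool.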

@Tony_Wood Hey thanks for the quick response. The sensitive information is one part of the problem which I could see a token fixing.

The other issue is having to rely on the model to make a tool call with the correct information. Let's say I'm fetching data from the database that's specific to the user (mortgage info, personal health info, etc.). The way I understand it now, I would have to rely on the LLM to pass in the correct user_id or other identifier.

There’s also a small chance a malicious attacker could use prompt injection to make tool calls on behalf of another user. Less likely, but the risk only drops to 0% if the value comes from static context rather than from the model.

I hope these examples help illustrate what I’m looking to solve. Possibly I’m misunderstanding how the templating works in CrewAI.

LangGraph’s solution is to provide access to the runtime config through a tool variable. I was hoping CrewAI has something similar. This also lets me build the graph once at startup and just pass in runtime variables.

      config = RunnableConfig(
          configurable={
              "thread_id": ctx.thread_id,
              "user_id": ctx.user_id,
              "conversation_id": ctx.conversation_id,
              **ctx.metadata,
          },
          recursion_limit=15,
      )
      result = await self._graph.ainvoke(
          {"messages": [HumanMessage(content=prompt)]}, config=config # config gets injected
      )

    @tool(parse_docstring=True)
    async def send_message_to_user(
        message: str,
        config: RunnableConfig,
        channels: List[str] = ["in-app"],
    ) -> str:
        """
        Send a notification message to a user via one or more channels.

        Args:
            message (str): The content of the message to send.
            channels (List[str], optional): A list of channels to send the message through (e.g., ["in-app"], ["email"], ["in-app", "email"]). Defaults to ["in-app"].

        Returns:
            str: JSON string confirming message delivery or containing error details.
        """
        try:
            if "configurable" not in config:
                raise ValueError("Configurable context is required for message sending")

            user_id = config["configurable"].get("user_id")
            conversation_id = config["configurable"].get("conversation_id")
            # ... (rest of the handler truncated in the original post)
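In the meantime, a framework-agnostic way I could approximate this is Python's stdlib `contextvars`: set the user id right before kickoff and read it inside the tool's `_run`, so it never appears in the argument schema the model sees. A minimal sketch without any framework (all names are mine):

```python
from contextvars import ContextVar

# Per-request context set by the request handler, never shown to the model.
current_user_id: ContextVar[str] = ContextVar("current_user_id")

def address_lookup() -> str:
    # Inside a real tool's _run(): read the id from context
    # instead of trusting an LLM-supplied argument.
    user_id = current_user_id.get()
    return f"[dummy address for {user_id}]"

# At request time: set the value, then kick off the (prebuilt) agent/crew.
token = current_user_id.set("u-8732adf")
try:
    print(address_lookup())  # [dummy address for u-8732adf]
finally:
    current_user_id.reset(token)
```

This keeps the "build once at startup, inject per request" property, and also plays well with async handlers since context vars are task-local.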

Hey, how are you doing?
Did you find any solution? I am facing the same issue right now…

This seems like a good use case for Flows - you can take the input as part of the kickoff and add it to state that's available to the tools. With a Flow you can also make the tool call manually and pass the input to the crew or to the next step of the flow.
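To make the pattern concrete, here is a stripped-down, framework-free sketch (the real Flow API differs; every name below is illustrative): kickoff stores the input in a state object, and the tool reads from state rather than from model-supplied arguments, so the sensitive id never passes through the LLM.

```python
from dataclasses import dataclass

@dataclass
class FlowState:
    # Populated at kickoff; tools read from here instead of trusting the model.
    user_id: str = ""
    address: str = ""

def address_lookup(state: FlowState) -> str:
    # Stand-in for the real tool; uses state, not an LLM-chosen argument.
    return f"[dummy address for {state.user_id}]"

def kickoff(inputs: dict) -> FlowState:
    state = FlowState(**inputs)
    # Manual tool call: the result goes into state for the next step/crew.
    state.address = address_lookup(state)
    return state

final = kickoff({"user_id": "u-8732adf"})
print(final.address)  # [dummy address for u-8732adf]
```

The key design point is that the deterministic lookup happens in plain code before any LLM step, so the model only ever sees the result it is allowed to see.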


hi, I had the same use case, and I did something like this:

  1. a base tool for all the tools that share the same auth token:

    class _GmailBaseTool(BaseTool):
        auth_token: Optional[str] = None
        _cached_email: Optional[str] = None

        def _ensure_token(self) -> str:
            if not self.auth_token:
                raise ValueError("Gmail tool requires an OAuth access token")
            return self.auth_token

        def _service(self):
            return gmail_service(self._ensure_token())

        def _resolve_user_email(self, service) -> str:
            if self._cached_email:
                return self._cached_email
            profile = (
                service.users().getProfile(userId=_GMAIL_USER).execute()
            )
            self._cached_email = profile.get("emailAddress", "")
            return self._cached_email or ""

  2. a factory to create agents:

    def build_calendar_agent(auth: AuthContext):
        token = _resolve_token(auth)
        tools = [
            CalendarListEventsTool(auth_token=token),
            CalendarCreateEventTool(auth_token=token),
            CalendarUpdateEventTool(auth_token=token),
            CalendarDeleteEventTool(auth_token=token),
            CalendarRespondToInviteTool(auth_token=token),
            CalendarFreeBusyTool(auth_token=token),
        ]
        return build_agent_from_prompt(
            name="Calendar Coordination Agent",
            prompt_key="calendar",
            tools=tools,
        )

  3. And run it by passing that auth context - call the factory to create a new agent, then kickoff()

Before this, I was just putting the authentication-related values into the prompt so that the LLM could pass them when invoking the tool. I also thought that might be dangerous, since it can sometimes expose unwanted data to users, so I wanted something more like a prebuilt tool where the agent does not need to care about authentication at all.
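The pattern above boils down to baking the credential into the tool instance at construction time, so it never appears in the model-facing argument schema. A stripped-down, framework-free sketch of that idea (all names illustrative):

```python
class AddressLookupTool:
    """Credential lives on the instance, not in the model-visible arguments."""

    # What the model is allowed to supply (no user_id / token here).
    args_schema = {"query": "free-text part of the request"}

    def __init__(self, auth_token: str):
        # Injected per request by the factory; invisible to the LLM.
        self._auth_token = auth_token

    def run(self, query: str) -> str:
        # A real tool would call the backend with self._auth_token here.
        return f"[result for {query!r}, fetched with a private token]"

# Per request: build the tool (and agent) with the caller's token, then kick off.
tool = AddressLookupTool(auth_token="tok-abc123")
print(tool.run("shipping address"))
```

The trade-off is constructing fresh tool/agent objects per request, which is cheap compared to the LLM calls themselves.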

One interesting thing I have experienced: I think OpenAI’s deep research approach is simply to let the model see the user’s credential data and use it. This is from a deep research response I received; note that I removed the user information part.

Connected Source: google_drive

The user has Google Drive access enabled. If a question might relate to documents, notes, spreadsheets, or presentations the user has on their Drive, start by searching Google Drive:

browser.search({ query: "<keywords>", source: google_drive })

Tips for Google Drive search:

  • Use specific keywords likely in the document title or body (project names, collaborators, file types like “doc”, “sheet”, “slides”, or unique phrases).

  • Favor recent documents or those with titles closely matching the query.

  • Before opening a file, consider snippet context if available to ensure relevance.

User information for Google Drive:

{
  "id": "00000000000000000000000",
  "name": "0000000000000000",
  "email": "00000000000000000000"
}