How can I inject sensitive runtime parameters (like a user ID) directly into my custom tool calls, so the model never has to see or reveal them, without rebuilding or reconfiguring the agent for every single request?
I looked at other posts but they either seem closed or outdated.
Thanks ahead of time!
```python
# crewai==0.134.0
from typing import Type

from pydantic import BaseModel, Field
from crewai import Agent, Crew, Task, Process
from crewai.tools import BaseTool


# ── 1. A super-simple DB lookup tool
class _Input(BaseModel):
    user_id: str = Field(..., description="INTERNAL user id – must stay private")


class AddressLookupTool(BaseTool):
    # BaseTool is a Pydantic model, so these fields need type annotations
    name: str = "Address Lookup Tool"
    description: str = "Returns the shipping address that matches user_id"
    args_schema: Type[BaseModel] = _Input

    def _run(self, user_id: str) -> str:
        # obviously stubbed:
        return f"[dummy address for {user_id}]"


lookup_tool = AddressLookupTool(result_as_answer=True)

# ── 2. Agent that owns the tool
db_agent = Agent(
    role="DB specialist",
    goal="Fetch private customer data without leaking PII",
    backstory="Knows how to query the CRM directly.",
    tools=[lookup_tool],
    llm="gpt-4o-mini",
    allow_delegation=False,
)

# ── 3. Task that should inject the hidden param
address_task = Task(
    description="Retrieve the customer’s shipping address.",
    expected_output="Just the street address – no other data.",
    agent=db_agent,
    tools=[lookup_tool],
)

# ── 4. Crew & kickoff
crew = Crew(
    agents=[db_agent],
    tasks=[address_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={"user_id": "u-8732adf"})
print("FINAL RESULT:", result.raw)
```
Welcome to the community!
Great question. I would look at creating a “token” for the sensitive user id and letting the model pass that. You can then decode it back to the “real” customer id inside the custom tool.
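A minimal sketch of that token idea, in plain Python (the function names and the in-memory map are my own assumptions, not CrewAI API): the model only ever sees an opaque token, and the tool swaps it back server-side.

```python
import secrets

# Hypothetical in-memory token store; in production this would live in
# Redis or a DB, with an expiry on each entry.
_TOKEN_TO_USER: dict[str, str] = {}


def issue_token(user_id: str) -> str:
    """Mint an opaque token the LLM can safely see and pass around."""
    token = secrets.token_urlsafe(16)
    _TOKEN_TO_USER[token] = user_id
    return token


def resolve_token(token: str) -> str:
    """Inside the tool's _run(): swap the token back for the real id."""
    try:
        return _TOKEN_TO_USER[token]
    except KeyError:
        raise ValueError("Unknown or expired token") from None


# Per request: kick off with the token instead of the raw id, e.g.
# crew.kickoff(inputs={"user_token": issue_token("u-8732adf")})
```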
@Tony_Wood Hey, thanks for the quick response. The sensitive information is one part of the problem, and I could see a token fixing that.
The other issue is having to rely on the model to make the tool call with the correct information. Let's say I'm fetching data from the database that's specific to the user (mortgage info, personal health info, etc.). The way I understand it now, I would have to rely on the LLM to pass in the correct user_id or other identifier.
There's also a small chance a malicious attacker could use prompt injection to trigger tool calls on behalf of another user. Less likely, but it's 0% if we rely on static context.
I hope these examples help illustrate what I'm looking to solve. Possibly I'm misunderstanding how the templating works in CrewAI.
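For reference, my understanding of CrewAI's templating is that `kickoff(inputs=...)` does plain string interpolation of `{placeholders}` into the task/agent text, which means the value lands in the LLM's prompt. Roughly equivalent to:

```python
# Rough stand-in for (my understanding of) CrewAI's input templating:
# kickoff inputs are formatted into the task description, so the raw id
# ends up in the model's context window -- exactly what I want to avoid.
description = "Retrieve the shipping address for customer {user_id}."
inputs = {"user_id": "u-8732adf"}

rendered = description.format(**inputs)
print(rendered)  # the model now sees the raw id
```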
LangGraph's solution is to provide access to the runtime config through a tool argument. I was hoping CrewAI has something similar. This also lets me build the graph once at startup and just pass in runtime variables per request.
```python
config = RunnableConfig(
    configurable={
        "thread_id": ctx.thread_id,
        "user_id": ctx.user_id,
        "conversation_id": ctx.conversation_id,
        **ctx.metadata,
    },
    recursion_limit=15,
)

result = await self._graph.ainvoke(
    {"messages": [HumanMessage(content=prompt)]},
    config=config,  # config gets injected
)
```
```python
@tool(parse_docstring=True)
async def send_message_to_user(
    message: str,
    config: RunnableConfig,
    channels: List[str] = ["in-app"],
) -> str:
    """
    Send a notification message to a user via one or more channels.

    Args:
        message (str): The content of the message to send.
        channels (List[str], optional): A list of channels to send the
            message through (e.g., ["in-app"], ["email"], ["in-app", "email"]).
            Defaults to ["in-app"].

    Returns:
        str: JSON string confirming message delivery or containing error details.
    """
    try:
        if "configurable" not in config:
            raise ValueError("Configurable context is required for message sending")
        user_id = config["configurable"].get("user_id")
        conversation_id = config["configurable"].get("conversation_id")
        # ... (rest of the tool elided)
```
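One generic Python pattern that would get the same "build once, bind per request" behavior without relying on the LLM at all (sketched in plain Python, since I haven't verified a CrewAI-specific API for this): keep the user id in a request-scoped `contextvars.ContextVar` that the tool body reads directly, so the model never supplies it.

```python
import contextvars

# Request-scoped slot, set by the web handler before kickoff; the agent
# and tool objects themselves are built once at startup.
current_user_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_user_id"
)


def address_lookup() -> str:
    """Tool body: reads the id from request context, not from LLM args."""
    user_id = current_user_id.get()  # raises LookupError if unset
    return f"[dummy address for {user_id}]"


# Per request: set the context, then hand off to the (prebuilt) crew.
current_user_id.set("u-8732adf")
print(address_lookup())
```

Because the id never appears in the tool's argument schema, a prompt-injected "look up user X instead" has nothing to override.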