Security Guardrail for sensitive data

@maxmoura @zinyando @Dabnis @Tony_Wood I currently serve as Director – Salesforce Delivery at a leading healthcare organization. Recently, we were tasked with evaluating various tools for implementation within one of our business units that is heavily Salesforce-oriented.

As an organization, we are keen on adopting CrewAI to automate certain operational workflows. However, during our evaluation process, we encountered challenges in clearly articulating and positioning the security mechanisms of CrewAI—particularly in comparison to the security framework presented by Agentforce. Salesforce shared a detailed security layer architecture (shown below), and we were asked to provide an equivalent, well-defined security framework for CrewAI in our upcoming presentation.

[image: Agentforce security layer architecture]

To support this effort, I would greatly appreciate access to any relevant documentation, presentation materials, whitepapers, architectural references, or literature outlining CrewAI’s built-in security mechanisms and best practices for implementing enterprise-grade security controls.

Your guidance would be extremely helpful in enabling us to present a structured and defensible security architecture.

Hi all, the documentation is here: https://docs.crewai.com/

Guardrail info is in Tasks - CrewAI, and some more here: Hallucination Guardrail - CrewAI.
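For anyone skimming those docs: a task guardrail there is essentially a validation callable that returns a (success, data) tuple — return (True, output) to accept, (False, reason) to trigger a retry. Here's a minimal sketch of one that blocks obvious PII. Note the real CrewAI guardrail receives a TaskOutput object rather than a bare string, so treat the signature and names below as illustrative:

```python
import re
from typing import Any, Tuple

# Illustrative pattern only: matches US SSN-style numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def no_pii_guardrail(output: str) -> Tuple[bool, Any]:
    """Reject task output that leaks obvious PII, pass it through otherwise.

    Mirrors the (success, data) return shape described in the task
    guardrail docs: (True, validated_output) accepts the result,
    (False, feedback) asks the agent to regenerate.
    """
    if SSN_PATTERN.search(output):
        return False, "Output contains what looks like an SSN; regenerate without it."
    return True, output
```

The feedback string in the failure branch matters: it is what the agent sees when it retries, so a specific message ("regenerate without it") converges faster than a generic rejection.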

IMHO: CrewAI takes longer to run, but I use it when I need a solid answer, along with other guardrail techniques.

One of the advantages highlighted by Agentforce is their claim of a zero data retention policy, regardless of whether their proprietary LLM is used or if the implementation leverages third-party models such as Claude or OpenAI. It appears they have established specific contractual terms and conditions with these LLM providers to support this commitment.

This is a great question and something many teams run into when moving from experimentation to enterprise deployment.

Frameworks like CrewAI mainly provide the orchestration layer for agents (tasks, tools, memory, etc.), but enterprise security usually requires additional layers around that.

In practice we tend to see a few different security concerns:

• runtime safety (preventing dangerous tool execution)
• data protection (masking / retrieval policies)
• execution integrity (being able to verify what the agent actually did)
• auditability (structured logs and traces for compliance)
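The runtime-safety concern in particular lends itself to a simple allowlist policy enforced outside the agent framework. A toy sketch, with hypothetical tool names — not a CrewAI API, just the shape of the control:

```python
class ToolPolicyError(Exception):
    """Raised when an agent attempts to execute a tool outside its policy."""

class ToolPolicy:
    """Runtime safety: only explicitly allow-listed tools may execute."""

    def __init__(self, allowed: set[str]) -> None:
        self.allowed = allowed

    def check(self, tool_name: str) -> None:
        # Deny-by-default: anything not explicitly allowed is blocked.
        if tool_name not in self.allowed:
            raise ToolPolicyError(f"Tool '{tool_name}' is not on the allowlist")

# Example policy: a support agent may search and open cases, nothing else.
policy = ToolPolicy({"search_knowledge_base", "create_case"})
```

The deny-by-default stance is the important design choice: new tools added to the codebase stay unreachable until someone consciously adds them to the policy.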

Most frameworks only partially address these out of the box, so organizations often add a governance layer around the agent system.

For example:
agent runtime
→ security / guardrails
→ execution logging
→ audit & compliance layer
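The logging and audit layers in that stack can be as simple as an append-only structured record of every agent step. A minimal sketch (field names and the agent/tool names are illustrative, not a framework API):

```python
import json
import time
from typing import Any

class AuditLog:
    """Execution integrity: append-only, structured record of agent steps."""

    def __init__(self) -> None:
        self.entries: list[dict[str, Any]] = []

    def record(self, agent: str, action: str, detail: dict[str, Any]) -> None:
        self.entries.append({
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "detail": detail,
        })

    def export(self) -> str:
        # JSON Lines is convenient for shipping to a SIEM or compliance store.
        return "\n".join(json.dumps(e) for e in self.entries)

log = AuditLog()
log.record("support_agent", "tool_call",
           {"tool": "search_knowledge_base", "query": "refund policy"})
```

In a real deployment you would write each entry to durable, tamper-evident storage rather than a list, but the point is the same: every tool call and decision becomes a reviewable record.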

Once agents start interacting with enterprise systems like Salesforce or internal APIs, having that execution trace and audit layer becomes particularly important for security reviews.

Curious how strict your organization’s security requirements are (HIPAA, SOC2, etc.) — that usually determines how much governance infrastructure needs to sit around the agent framework.


@Bin_Zhang Ours is the healthcare industry, with lots of sensitive data.

That makes sense — healthcare usually has much stricter requirements around PHI and auditability.

In environments like that we often see teams add a few additional controls around the agent layer:

• strict data filtering / masking before prompts are sent to the LLM
• tool execution policies (agents can only call specific APIs)
• structured execution logs for every agent step
• a review or validation step before sensitive actions are executed
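The first control, masking before prompts leave your boundary, can be sketched with typed placeholders. The patterns below are deliberately naive examples; real PHI de-identification should rely on vetted healthcare tooling, not a handful of regexes:

```python
import re

# Hypothetical patterns for illustration only.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def mask_phi(text: str) -> str:
    """Replace likely PHI with typed placeholders before text reaches an LLM.

    Typed placeholders ([SSN], [MRN], ...) keep the prompt readable for the
    model while ensuring the raw identifiers never leave your boundary.
    """
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running the filter as a mandatory step in the orchestration layer (rather than trusting each agent's prompt template) is what turns it into a control rather than a convention.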

In many enterprise deployments the agent framework (CrewAI, LangChain, etc.) is just the orchestration layer, and organizations place a governance layer around it to enforce security and compliance policies.

Curious whether your current architecture already includes things like prompt filtering or execution auditing.