Hey folks, I need help with implementing prompt caching in CrewAI. I went through the codebase and found that CrewAI uses LiteLLM, which ultimately calls the Anthropic API. The challenge is that Anthropic's prompt caching involves marking content blocks with `cache_control` (and, on older SDK versions, passing beta headers) along with the message. Here's what a direct call against the Anthropic SDK looks like:
```python
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an AI assistant tasked with analyzing literary works. "
                    "Your goal is to provide insightful commentary on themes, "
                    "characters, and writing style.\n",
            # cache_control goes inside the content block it applies to,
            # not in a separate dict
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": user_prompt}],  # user_prompt defined elsewhere
)
```
However, I can't figure out how to customize the messages that CrewAI builds behind the scenes for the multi-agent system I'm running. Does anyone know how I can modify these message parameters, or inject `cache_control` content blocks into them, to enable prompt caching?
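For reference, this is roughly the LiteLLM-level request I'm trying to get CrewAI to produce. It's only a sketch based on LiteLLM's documented `cache_control` pass-through for `anthropic/` models; the model name and system text are placeholders, and I still don't know where CrewAI would let me inject this:

```python
import litellm

# Sketch: the shape the underlying LiteLLM call would need for Anthropic
# prompt caching to kick in. LiteLLM forwards `cache_control` on content
# blocks to the Anthropic API for anthropic/ models.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You are an AI assistant tasked with analyzing literary works.",
                    "cache_control": {"type": "ephemeral"},  # mark this system block as cacheable
                }
            ],
        },
        {"role": "user", "content": "Analyze the major themes in Pride and Prejudice."},
    ],
)
```

The open question is where in CrewAI's agent/task pipeline I could hook in to rewrite the messages into this shape before they reach `litellm.completion`.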
Would appreciate any insights or suggestions! Thanks
Relevant code: