Hey! I just started using CrewAI, completed the quick start guide, and tweaked the agent to create a simple PoC. But I don't fully understand which file in the CrewAI framework shows how the LLM API calls are made and how the LLMs are used to complete a given agent task. Thanks for any help!
That's the beauty of the framework:
all of those details are abstracted away for us. Behind the scenes, LiteLLM handles the actual interaction with the LLM.
The definition of your agent and tasks becomes the prompt that drives the execution.
Hope this helps.
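To make that idea concrete, here's a rough sketch (not CrewAI's actual code, and the field names and wording are only illustrative) of how an agent definition like the one in the quick start can be folded into the system prompt that eventually reaches the LLM:

```python
# Illustration only: CrewAI-style agents describe themselves in natural
# language (role, goal, backstory), and the framework stitches those
# fields into the prompt it hands to LiteLLM.
def build_system_prompt(role: str, goal: str, backstory: str) -> str:
    return (
        f"You are {role}. {backstory}\n"
        f"Your personal goal is: {goal}"
    )

# Hypothetical example values, similar to a quick-start agent
prompt = build_system_prompt(
    role="Research Analyst",
    goal="summarize recent AI news",
    backstory="You have a decade of experience in tech journalism.",
)
print(prompt)
```

So there is no single "LLM call" file you need to edit; you shape the model's behavior by shaping these definitions.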
If you want to see all the calls made to the LLM, you can turn on LiteLLM's debug logging:

```python
import litellm

# Prints verbose debug output for every LLM request and response
litellm._turn_on_debug()
```
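If the full debug output is more than you want, a possible alternative is to raise the verbosity of LiteLLM's logger through Python's standard logging module. (This assumes LiteLLM registers its logger under the name "LiteLLM"; check your installed version.)

```python
import logging

# Route log records to stderr and turn up LiteLLM's logger.
# Assumption: LiteLLM's logger is named "LiteLLM".
logging.basicConfig()
logging.getLogger("LiteLLM").setLevel(logging.DEBUG)

# Confirm the level took effect
print(logging.getLogger("LiteLLM").level == logging.DEBUG)  # True
```

This lets you dial the verbosity up or down without touching LiteLLM's private `_turn_on_debug()` helper.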
Thanks, this thread was helpful.
I noticed that CrewAI has some grammatical errors in its system prompt. I hope they fix it soon.
This will be updated, thank you. If it gives you any relief, the models are pretty good at inferring intent even with imperfect grammar, though that's still no excuse.