Hey! I just started using crewAI, completed the quick start guide, and tweaked the agent to create a simple PoC. But I don't fully understand which file in the CrewAI framework shows how the LLM API calls are made, and how the LLMs are used to complete a given agent task. Thanks for any help!
That is the beauty of the framework: all of those details are abstracted away for us. Behind the scenes, LiteLLM handles the actual interaction with the LLM.
The definition of your agent and tasks becomes the prompt that drives the execution.
Hope this helps.
If you want to see all the calls to the LLM, you can turn on LiteLLM's debug output before running your crew:

```python
import litellm

# Logs every request/response LiteLLM sends to the LLM provider
litellm._turn_on_debug()
```
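If you prefer to control verbosity through Python's standard `logging` module instead of the debug switch, a minimal sketch follows. It assumes LiteLLM registers its logger under the name `"LiteLLM"` (worth confirming against your installed version):

```python
import logging

# Assumption: LiteLLM emits its logs under the "LiteLLM" logger name.
# Raising that logger to DEBUG surfaces the LLM requests it makes.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s: %(message)s"))

llm_logger = logging.getLogger("LiteLLM")
llm_logger.setLevel(logging.DEBUG)
llm_logger.addHandler(handler)
```

This keeps LLM traffic logs on the same handlers and format as the rest of your application, so you can dial them down to `INFO` or `WARNING` once the PoC works.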