# #1 Podcast Prepper

## Description
Python example using CrewAI, Anthropic Claude 3.5 Sonnet as an LLM, and Exa as a tool.
It’s designed for podcast hosts, helping them:
- research a guest,
- prepare detailed insights about the guest, and
- suggest relevant questions for an upcoming episode with the guest.
Refer to the `rok_benko_report.md` file for an example of the final report if the entered guest is Rok Benko.
## Problem Addressed
The screenshot below shows Google Trends data for the search term podcast on YouTube worldwide from 2008 to the present, highlighting a clear long-term upward trend in podcast popularity.
But according to a Reddit thread, preparing for a podcast can take several hours!

Nice_Butterscotch995 says:

> I would say the prep work averaged three to four hours per episode or thereabouts, longer if they were an author or filmmaker. At the same time, I would harvest links and photos for my show notes. I wouldn’t call the process annoying, but it was definitely a time suck.

Fuzzy_Mic_2021 says:

> My opinion is that there is no such thing as too much prep. For a 10 minute interview, I’ll do 4 hours of research. I love the process.
## Learning Goal
- Solving the addressed problem with CrewAI: We’ll demonstrate how the CrewAI framework can drastically reduce the time required for podcast preparation by leveraging a multi-agent AI system. By employing CrewAI Flows, we cut down the preparation time from 4 hours to just ≈3 minutes, achieving a 98.75% reduction in time spent, all for ≈$0.13 in total!
> [!NOTE]
> This $0.13 covers all expenses, including both the Anthropic LLM and the Exa tool. For more details about the cost, refer to the Behind the Scenes section.
## Getting Started

> [!NOTE]
> The instructions are specific to Windows. For macOS or Linux, please use the corresponding commands for your operating system.

- Clone the repository: `git clone https://github.com/rokbenko/ai-playground.git`
- Change the directory: `cd ai-playground/crewai-tutorials/1-Podcast_prepper/podcast_prepper`
- Create an `.env` file in the root directory to set up your environment variables (Note: Refer to the example below for the required environment variables.)
- Install the dependencies using Poetry: `poetry install` (Note: This may take a few minutes. Be patient!)
- Activate the virtual environment using Poetry: `poetry shell`
- Run the CrewAI flow using Poetry: `poetry run flow`
> [!IMPORTANT]
> Your `.env` file should contain the following environment variables:
>
> `ANTHROPIC_API_KEY = "sk-ant-xxxxxxxxxxxxxxxxxxxxxxxxx"`
>
> `EXA_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxx"`
## Tech Stack

The project’s main tech stack consists of Python, CrewAI, Anthropic Claude 3.5 Sonnet, and Exa. For more detailed information, please refer to the `poetry.lock` file.
## Workflow

- Run the CrewAI flow using Poetry: `poetry run flow`
- Enter the guest’s first and last name
- CrewAI working… (Note: You don’t need to do anything.)
- When the Guest Research crew collects data on the guest, it will prompt you for human input (i.e., feedback)
- CrewAI working… (Note: You don’t need to do anything.)
- The project’s final output is saved in the format `<name>_<surname>_report.md` (Note: For an example, refer to the `rok_benko_report.md` file, generated if the entered guest is Rok Benko.)
After running the CrewAI flow, a terminal input prompt will appear, as shown in the screenshot below. Enter the guest’s first and last name. For example, I entered Rok Benko.
> [!WARNING]
> After the Guest Research crew collects data on the guest, it will prompt you for human input. At this stage, you can make any necessary corrections or simply respond with something like "Everything is fine, continue." It’s crucial to provide input, even if you’re satisfied with the report, as the flow will not continue without your confirmation. You have the flexibility to edit, add, or request the deletion of any information gathered by the crew. Additionally, you can specify what you don’t like in the report, and the crew will rerun the process, making adjustments to improve the report based on your feedback.

CrewAI’s human-in-the-loop integration is particularly useful for the project in the following scenarios:
- Identifying incorrect data: The Guest Research crew may collect data about a person who shares the same first and last name but is not your guest. For instance, the crew might list a social media profile link for someone else with the same name.
  - Solution: You can provide input like: "Change the Twitter link to x.com, remove the Facebook link completely as this is not the Rok Benko who will be my guest, and add his YouTube channel link I found online: https://www.youtube.com/@rokbenko."
- Requesting a full report: The Guest Research crew may occasionally return a short summary instead of a complete markdown report on the guest.
  - Solution: Simply respond with: "Write a full markdown report."
> [!IMPORTANT]
> Only enter the guest’s first and last name (i.e., `<Name> <Surname>`). Don’t use phrases like `Guest: <Name> <Surname>`, `I will guest <Name> <Surname>`, or `My guest is <Name> <Surname>`. These will not generate the expected report due to the configuration of the Exa web search tool.
> [!NOTE]
> Keep in mind that this project is primarily a proof of concept. While it works well most of the time, occasional errors may occur, or the CrewAI output may not meet expectations. In such cases, rerunning the flow or using the human-in-the-loop feature more effectively should help resolve the issue.
## Behind the Scenes

### Project

The project was built with CrewAI Flows by running `crewai create flow podcast_prepper`. Flows simplify CrewAI workflow creation by enabling you to easily chain together multiple crews, manage and share state between different tasks, and implement conditional logic, loops, and branching within your workflows, all while ensuring dynamic and responsive interactions.
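For orientation, here is a minimal, self-contained sketch of the Flows pattern used here: two steps chained with `@start`/`@listen`, where the second step receives the first step’s output. The class name and step bodies are placeholders, not the project’s actual code.

```python
# Minimal sketch of the CrewAI Flows chaining pattern (illustrative only).
# The real project runs the Guest Research and Questions Research crews
# inside steps like these; the bodies below are simple placeholders.
from crewai.flow.flow import Flow, listen, start


class PodcastPrepperFlow(Flow):
    @start()
    def research_guest(self):
        # In the real flow, the Guest Research crew runs here and returns
        # a markdown report about the guest.
        guest = "Rok Benko"  # illustrative input
        return f"# Report on {guest}\n..."

    @listen(research_guest)
    def prepare_questions(self, guest_report):
        # In the real flow, the Questions Research crew turns the report
        # into a list of interview questions.
        return f"Interview questions based on:\n{guest_report}"


if __name__ == "__main__":
    print(PodcastPrepperFlow().kickoff())
```

A second sketch, after the crew descriptions below, shows how one of these crews could be wired up.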
The flow consists of two crews, each designed to handle specific aspects of podcast preparation:
- Guest Research crew
  - Description: The Guest Research crew is responsible for conducting an in-depth investigation into the podcast guest. It gathers comprehensive information about the guest’s background, career milestones, public image, and more, ensuring a well-rounded profile.
  - Agents in this crew:
    - Senior Researcher: This agent conducts an in-depth investigation into the podcast guest.
  - Tasks in this crew:
    - Research: This task involves gathering detailed information about the guest, focusing on aspects such as background, education, career milestones, and more. The output is a structured markdown report.
  - Tools used by this crew:
    - Exa: This tool searches the web for information about the guest.
  - Log: You can review an example log from the Guest Research crew, generated when I entered Rok Benko as a guest, by checking the `log_guest_research_crew.txt` file.
- Questions Research crew
  - Description: The Questions Research crew is responsible for formulating a set of relevant and thought-provoking questions for the guest. It ensures the questions are designed to be engaging and encourage a personal dialogue, often exploring philosophical themes.
  - Agents in this crew:
    - Senior Journalist: This agent creates insightful questions for the podcast guest.
  - Tasks in this crew:
    - Journalism: This task involves forming questions based on the guest’s report made by the previous crew. The output is a markdown list of questions, phrased in the first person and structured chronologically.
  - Tools used by this crew: No tools are specified for this crew.
  - Log: You can review an example log from the Questions Research crew, generated when I entered Rok Benko, by checking the `log_questions_research_crew.txt` file.
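To make the descriptions above more concrete, here is a hedged sketch of how a crew like Guest Research could be wired up in CrewAI, with the Exa search tool and human-in-the-loop review enabled. The role, prompts, and model identifier are illustrative; the actual project’s definitions may differ.

```python
# Hypothetical sketch of a "Guest Research"-style crew (not the project's exact code).
from crewai import Agent, Crew, Task
from crewai_tools import EXASearchTool  # reads EXA_API_KEY from the environment

researcher = Agent(
    role="Senior Researcher",
    goal="Conduct an in-depth investigation into the podcast guest {guest}.",
    backstory="You thoroughly research podcast guests before every episode.",
    tools=[EXASearchTool()],
    llm="anthropic/claude-3-5-sonnet-20240620",  # model identifier is illustrative
)

research_task = Task(
    description=(
        "Gather detailed information about {guest}: background, education, "
        "career milestones, public image, and more."
    ),
    expected_output="A structured markdown report about the guest.",
    agent=researcher,
    human_input=True,  # pauses for your feedback before the task output is accepted
)

guest_research_crew = Crew(agents=[researcher], tasks=[research_task])
# guest_research_crew.kickoff(inputs={"guest": "Rok Benko"})
```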
### Poetry

The `pyproject.toml` file includes two commands:

- `flow` (full command: `poetry run flow`), which runs the flow, and
- `plot` (full command: `poetry run plot`), which plots the flow.
### Cost

The total cost depends on the flow run, but it’s typically ≈$0.13 when using the Anthropic Claude 3.5 Sonnet LLM. As of this writing, Claude 3.5 Sonnet costs $3 per million input tokens and $15 per million output tokens (see the Anthropic Pricing page). Even if the total cost reaches $0.14 or $0.15, consider the value of your time: would you trade $0.15 for 4 hours of manual work?
Here’s a detailed cost breakdown based on the `log_token_usage.txt` file:

- Anthropic Claude 3.5 Sonnet LLM costs: $0.121734
  - Guest Research crew: $0.096675
    - Prompt tokens used: 14,410 → $0.04323 (14,410 × $3 / 1,000,000)
    - Completion tokens used: 3,563 → $0.053445 (3,563 × $15 / 1,000,000)
  - Questions Research crew: $0.025059
    - Prompt tokens used: 3,908 → $0.011724 (3,908 × $3 / 1,000,000)
    - Completion tokens used: 889 → $0.013335 (889 × $15 / 1,000,000)
- Exa tool costs (Note: Estimated, refer to the note below.): $0.015

This brings the total cost to $0.136734. Again, I want to emphasize that the total cost depends on the flow run, but it’s typically ≈$0.13.
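As a quick sanity check, the per-crew figures above follow directly from the token counts and the per-million-token prices:

```python
# Reproduce the cost breakdown from log_token_usage.txt using the Claude 3.5
# Sonnet prices quoted above: $3 per million prompt tokens and $15 per million
# completion tokens. The Exa figure is the estimate discussed in the note below.
PROMPT_PRICE = 3 / 1_000_000
COMPLETION_PRICE = 15 / 1_000_000


def crew_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE


guest_research = crew_cost(14_410, 3_563)    # 0.096675
questions_research = crew_cost(3_908, 889)   # 0.025059
exa_estimate = 0.015                         # estimated Exa cost per run

print(f"{guest_research + questions_research + exa_estimate:.6f}")  # 0.136734
```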
> [!TIP]
> I tried using a cheaper, less capable LLM, but errors can occur since these models work less effectively with CrewAI (source). Even if no errors happen, the final report tends to be of lower quality. For this reason, I suggest using one of the top LLMs, like Anthropic Claude 3.5 Sonnet, to ensure both reliability and a high-quality final report.
> [!NOTE]
> Currently, the Exa dashboard doesn’t provide visibility into the exact cost per run. However, these expenses are generally small. Over many runs building this project, I spent only $0.65 on Exa. Estimating the cost per run at $0.015 is likely a reasonable approximation.
### Plot

CrewAI Flows also let us visualize the workflow by plotting the flow. I configured the command for plotting the flow as `poetry run plot`. The screenshot below is derived from the `crewai_flow.html` file generated by the command.
> [!TIP]
> For higher quality, click on the image.
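Under the hood, plotting can be as simple as calling the flow’s `plot()` method, which writes the HTML visualization. A minimal sketch, reusing the hypothetical `PodcastPrepperFlow` class from the earlier Flow sketch (the entry-point wiring shown is an assumption, not the project’s actual code):

```python
# Hypothetical plot entry point: Flow.plot() writes an interactive HTML
# visualization of the flow's steps and listeners (crewai_flow.html by default).
def plot():
    PodcastPrepperFlow().plot()
```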
## Demonstration

For demonstration purposes, I ran the project and entered Rok Benko as a guest. You can review the outputs in the following files:

- `log_guest_research_crew.txt` for the output from the Guest Research crew,
- `log_questions_research_crew.txt` for the output from the Questions Research crew,
- `log_token_usage.txt` for the token usage, and
- `rok_benko_report.md` for the final report on the guest.