Automated Project notebook with Gemini

Hi Community,

I am trying to run this notebook: Practical Multi AI Agents and Advanced Use Cases with crewAI - DeepLearning.AI

but using Gemini. How can I set up the Gemini LLM and the Gemini 2.0 Flash model?

Thanks in advance.

  1. You have to create a ‘.env’ file in the root directory of the script you want to run and write your API key in it:
    GEMINI_API_KEY=jnsdjfjssdsdnsdkjfns

Comment out the OpenAI model line so it looks like this:

#os.environ['OPENAI_MODEL_NAME'] = 'gpt-4o-mini'

The script will then have access to the key; it gets loaded as an environment variable when you execute it (see the load_dotenv step further down).
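As a quick sanity check, here's a minimal sketch (assuming you use python-dotenv, as shown further down) that confirms the key from `.env` is visible to the script:

```python
# Minimal sketch: confirm the key from .env is visible to the script.
# Assumes python-dotenv is installed (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

gemini_key = os.getenv("GEMINI_API_KEY")
if gemini_key is None:
    raise RuntimeError("GEMINI_API_KEY not found - check your .env file")
print("Gemini key loaded:", gemini_key[:4] + "...")  # don't print the full secret
```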

2- In the import section, add ‘LLM’ to the crewai import line after Agent, Task. It’s a class that can connect to the LLM providers supported by LiteLLM (check their website for the list of supported models).

from crewai import Agent, Task, Crew, LLM

From now on, to initialize an LLM client, all you have to do is this:

llm = LLM(model="<provider>/<model_name>")

In your case:

llm = LLM(model="gemini/gemini-2.0-flash")
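If you want to be explicit about which key and settings are used, here is a sketch assuming `crewai.LLM` accepts `api_key` and `temperature` keyword arguments (it forwards them to LiteLLM); if your version differs, the plain form above is enough:

```python
import os
from crewai import LLM

# Sketch: pass the Gemini key and a temperature explicitly instead of relying
# purely on the GEMINI_API_KEY environment variable.
llm = LLM(
    model="gemini/gemini-2.0-flash",
    api_key=os.getenv("GEMINI_API_KEY"),  # assumes the key was loaded from .env
    temperature=0.2,
)
```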

3- Replace this line (the one you commented out earlier) with the LLM initialization above:

os.environ['OPENAI_MODEL_NAME'] ='gpt-4o-mini'
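In other words, a before/after sketch of what changes in the notebook:

```python
# Before (original notebook, OpenAI):
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-4o-mini'

# After (Gemini via LiteLLM):
llm = LLM(model="gemini/gemini-2.0-flash")
```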

4- When creating the crew:

# Creating Crew
crew = Crew(
  agents=[
    project_planning_agent,
    estimation_agent,
    resource_allocation_agent
  ],
  tasks=[
    task_breakdown,
    time_resource_estimation,
    resource_allocation
  ],
  verbose=True,
  llm=llm
)

You need to set the llm parameter on your agents so they use the one you just created (see the sketch below).
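For the agents themselves, a minimal sketch (the role/goal/backstory strings below are illustrative placeholders, not the notebook's exact text):

```python
from crewai import Agent

# Sketch: give each agent the Gemini LLM explicitly.
project_planning_agent = Agent(
    role="Project Planner",
    goal="Break the project down into actionable tasks",
    backstory="An experienced project manager.",
    llm=llm,  # the LLM(model="gemini/gemini-2.0-flash") object created earlier
)
```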

When a project is created from the template

crewai create crew name   <-- I may not have the exact command right, but you get my point...

OpenAI is set by default. This is why the llm parameter was not visible in the notebook; it is set in the background when the crew is created from the template.

Enjoy

Damn, I forgot: you also have to load the environment variables.

from dotenv import load_dotenv
load_dotenv()
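Putting it all together, a minimal end-to-end sketch (the agent and task definitions are illustrative placeholders, not the notebook's exact code):

```python
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, LLM

load_dotenv()  # pulls GEMINI_API_KEY from .env

llm = LLM(model="gemini/gemini-2.0-flash")

planner = Agent(
    role="Project Planner",
    goal="Break the project into tasks",
    backstory="A seasoned project manager.",
    llm=llm,
)

task_breakdown = Task(
    description="Break the project described in {project_description} into tasks.",
    expected_output="A bullet list of tasks with rough effort estimates.",
    agent=planner,
)

crew = Crew(agents=[planner], tasks=[task_breakdown], verbose=True)

result = crew.kickoff(inputs={"project_description": "Build a small website"})
print(result)
```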

Here you go. The proper explanation, by Gemini 2.0 Flash:


Okay, let's break down this code explanation like we're explaining it to a friend who's never coded before!

**The Big Picture**

Imagine you're building a project and you need help from a team of experts. This code is about setting up that team and telling them how to communicate.  Think of it like hiring a project manager, an estimator, and a resource person and then giving them tasks.

Instead of real people though, we are using AI. This code is using a tool called "CrewAI" to create and manage a group of AI "agents" that can work together on a project.

**Key Parts Explained**

1.  **The Magic Key (.env File):**
    *   Imagine you have a special key that unlocks all the cool AI features. This key is called an API key. The code tells you to create a file named `.env` – think of it as a special little box where you store secret information that the code needs. Inside this box, you'll write something like `GEMINI_API_KEY=your_secret_key_here`. This key tells the AI who you are and allows you to use its services.
    *   **Why?**  It's like having a password to use a service. You don't want it to be visible in your code for everyone to see, so it's stored separately.

2. **Loading the Magic Key:**
    *   The code then tells you to add two lines:
          ```python
            from dotenv import load_dotenv
            load_dotenv()
          ```
    *   `from dotenv import load_dotenv` is like getting the instructions on how to open that special little box.
    *   `load_dotenv()` is like actually opening the box and getting the key. The code can now use your secret API key.

3.  **The "CrewAI" Team (Import Statements & `LLM`):**
    *   `from crewai import Agent, Task, Crew, LLM`
        *   This is like picking out the types of team members you want. `Agent` is the general term for each AI helper, `Task` is what they'll be doing, `Crew` is the entire team, and `LLM` is the tool they'll use to think & communicate. It's like saying we want our team to be made up of agents, each given various tasks, all communicating through an LLM.
    *   Think of `LLM` as the language that all of the AI Agents in your team understand. It's like using Google Translate - each member says something in a specific language and it translates it for everyone to understand.
    *   The code then says you can set up the language model for your team using this line:
          ```python
          llm = LLM(model="gemini/gemini-2.0-flash")
          ```
      *   This is the part where you decide what type of language your "Google Translate" will be working with. `llm = LLM(...)` is like saying, "Okay, we're going to use the 'gemini' language with the 'gemini-2.0-flash' flavour of it." All communication and thinking will happen with this language model.

4.  **Building the Team (Crew Creation):**
    *   ```python
            crew = Crew(
               agents=[
                   project_planning_agent,
                   estimation_agent,
                   resource_allocation_agent
               ],
               tasks=[
                    task_breakdown,
                    time_resource_estimation,
                    resource_allocation
               ],
               verbose=True,
               llm=llm
           )
      ```
    *   This section is where you actually create your "crew."
    *   **`agents=[...]`**: Here, you list the different AI helpers you want in your team. Think of `project_planning_agent`, `estimation_agent`, and `resource_allocation_agent` as AI employees that have their own specific skill sets.
    *   **`tasks=[...]`**: This is where you list the jobs you want them to do, like "task_breakdown" (breaking down a big task into smaller parts), "time_resource_estimation", and "resource_allocation".
    *   **`verbose=True`**: This says, "Tell me everything! All the details!" so you can see what's going on while the team works. It's like having a live transcript of their conversation.
    *   **`llm=llm`**:  This is crucial! It's like giving each of your AI agents a specific language to use. This says, "Hey, agents! Use the gemini/gemini-2.0-flash 'language' so you all understand each other!".

5.  **Why not "OpenAI" by default anymore?**
    *   The explanation mentions that when creating a crew from a template, "OpenAI" was used by default in the background. But now we want to use `gemini`. The `llm=llm` parameter tells your agents not to use the background setting, but instead to use the language model you set earlier in the code, `llm = LLM(model="gemini/gemini-2.0-flash")`. This lets your AI agents communicate through `gemini` instead of the default.

**In Simple Terms:**

This code helps you:

1.  Set up a team of AI helpers, each with different skills.
2.  Give them specific tasks to complete.
3.  Ensure they're all communicating using the same language.
4.  Store your API key safely and load it so they can work.
5.  Choose your language model so your AI agents can function in your desired format.

**Analogy Time!**

Imagine you're organizing a birthday party:

*   **The .env file** is like a secret box where you store the password to your email so you can send out the party invites.
*   **The CrewAI setup** is like hiring the party planner, the budget manager, and the delivery team (your AI agents).
*   **The language model** you chose (Gemini) is the "language", the method of communication the AI workers are familiar with.
*   **The tasks** are like booking the venue, deciding the budget, and ordering the cake.
*   The `llm=` parameter is like making sure every member of the party-planning group knows which language and communication style to use.

This code does the behind-the-scenes work to get that party (your project) planned and running smoothly!

Thanks so much! In addition, I added this line to .env: MODEL=gemini/gemini-1.5-flash, and it finally worked.
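For reference, the final `.env` would then contain both lines (the key value is the same placeholder as above):

```
GEMINI_API_KEY=jnsdjfjssdsdnsdkjfns
MODEL=gemini/gemini-1.5-flash
```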
