CrewAI CLI command is slow to respond

The crewai CLI command is very slow to respond on a fresh, clean install.

Sourcing the venv and running time crewai returns this:

real 0m22.214s
user 0m1.423s
sys 0m0.242s

Inspecting further (running python -X importtime -c "import crewai"), I can see that litellm is the culprit here, taking ~20 seconds to import.
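The -X importtime output is verbose, so a small helper to rank modules by cumulative import time makes the culprit obvious. This is a sketch of how I narrowed it down; the slowest_imports helper name is mine, not part of any library:

```python
import re
import subprocess
import sys


def slowest_imports(importtime_output: str, top: int = 5):
    """Parse `python -X importtime` output and return the modules with the
    largest cumulative import time, in microseconds."""
    rows = []
    for line in importtime_output.splitlines():
        # Data lines look like: "import time:       123 |       456 | package.name"
        m = re.match(r"import time:\s+(\d+)\s+\|\s+(\d+)\s+\|\s*(\S+)", line)
        if m:
            self_us, cum_us, name = m.groups()
            rows.append((int(cum_us), name))
    rows.sort(reverse=True)
    return rows[:top]


if __name__ == "__main__":
    # Run this inside the project venv (assumes crewai is installed there);
    # importtime writes its report to stderr.
    proc = subprocess.run(
        [sys.executable, "-X", "importtime", "-c", "import crewai"],
        capture_output=True, text=True,
    )
    for cum_us, name in slowest_imports(proc.stderr):
        print(f"{cum_us / 1_000_000:8.2f}s  {name}")
```

In my case the top entry was litellm by a wide margin.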

I’m new to the Python world and don’t know how to troubleshoot this further.

My project was created with uv and I'm running it on a Mac.

Appreciate any insights I could get from the community.

I believe the problem is related to network calls: turning off my internet connection and running the command again makes it return almost instantly.

Finally got it fixed.

It seems that the issue was that litellm fetches a model cost map from an external server during import, and this network request was slow due to my ISP or routing issues.

Among the several fixes I attempted, enabling a VPN on my machine and routing the traffic through it did resolve the problem.

While profiling, I also found that the bottleneck resided in the get_model_cost_map function. That led me to the LITELLM_LOCAL_MODEL_COST_MAP environment variable and, after some searching, to this page in their docs, which states:

Don’t pull hosted model_cost_map

If you have firewalls, and want to just use the local copy of the model cost map, you can do so like this:

export LITELLM_LOCAL_MODEL_COST_MAP="True"

Note: this means you will need to upgrade to get updated pricing, and newer models.
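Besides exporting the variable in the shell, the flag can also be set programmatically. A minimal sketch, assuming it runs before anything imports litellm (the flag is read at import time, so setting it afterwards has no effect; the crewai import is commented out since it only applies inside a project venv):

```python
import os

# Must be set before litellm is first imported, because litellm checks
# this flag at import time when deciding whether to download the cost map.
os.environ["LITELLM_LOCAL_MODEL_COST_MAP"] = "True"

# import crewai  # safe now: litellm will use its bundled local cost map
```

Putting these two lines at the very top of the entry-point script has the same effect as the shell export, without touching shell config.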

I hope this can help someone going through the same issue.