Building your own AI Agent but hitting my head on "ImportError: cannot import name 'LangSmithParams'"

Hi all,

I am new to CrewAI and have been using Lightning.ai as my IDE to start experimenting with AI Agents.

I have been following a Matthew Berman video; here's the link to the one I have been watching, on the right way to build AI Agents with CrewAI.

Can you advise me where I should start to build AI Agent examples? The video is good, but I think it might be missing many details.

My main issue is not understanding how to deal with this error:

ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py)

Hi @Charles_Mo,
One of the best places to start is the Getting Started with CrewAI guide.
Reason: there are several LLM Agent frameworks, and each has its own philosophy of how it is structured, its terminology, etc. If you have decided on CrewAI, then the best place to start is the 'getting started' section in that link above.

All LLM Agent frameworks have one common denominator: however they structure the user level, and whatever terms they use (Task, Agent, etc.), under the hood they all basically do the same thing:
They create prompts and context from the definitions of Tasks and Agents, and sometimes from memory, to produce a single text prompt that is fed into an LLM to perform a task. Yes, Tasks and Agents are just named containers where text is collected from the user. Under the hood, each framework has its own way of manipulating that text to form a prompt.
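As a rough illustration (a hypothetical sketch, not CrewAI's actual internals — the function and field names here are made up), that prompt assembly boils down to something like:

```python
# Hypothetical sketch: how an agent framework might join the text collected
# in its "Agent" and "Task" containers (plus memory) into one LLM prompt.

def build_prompt(agent, task, memory=None):
    """Assemble a single text prompt from agent/task definitions."""
    parts = [
        f"You are {agent['role']}. Your goal: {agent['goal']}",
        f"Task: {task['description']}",
        f"Expected output: {task['expected_output']}",
    ]
    if memory:  # optional extra context pulled from a memory store
        parts.append("Relevant context:\n" + "\n".join(memory))
    return "\n\n".join(parts)

prompt = build_prompt(
    agent={"role": "a research analyst", "goal": "summarise findings"},
    task={"description": "Summarise the attached notes.",
          "expected_output": "Three bullet points."},
)
```

Every framework dresses this up differently, but the end product handed to the LLM is always one block of text like this.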

That’s the basics of all LLM Agent type systems.

So do the ‘getting started’ to learn how CrewAI does this.

We are all here to help you if needed.

Best of luck :hand_with_index_finger_and_thumb_crossed:

Your issue: Have you installed the requirements?

N.B. I have noticed some discussions on this Discord relating to issues with different versions of Python; it may be worth a look. In the search box (top right), type in 'python' …
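On the ImportError itself: that particular message usually means mismatched langchain package versions in the environment, so checking what is installed and upgrading the related packages together is a reasonable first step (commands assume pip; adjust for conda):

```shell
# Show the installed versions of the packages involved in the ImportError
pip show langchain-core langchain-openai | grep -E "Name|Version"

# Upgrading crewai normally pulls in a compatible set of langchain packages
pip install --upgrade crewai langchain-core langchain-openai
```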

This is great @Dabnis and thank you for taking the time to write this. I will keep this post alive with my progress, which I hope will help others.

Okay, I have finally followed the CrewAI tutorial to build your first AI crew, but I am getting a lot of "OpenAIError"s.

I know why, but I'm not sure how to fix it. I am attempting to use a Groq LLM, so I have defined the API key in the .env file, but it is clear I need to 'tell' the code to use the Groq API instead of OpenAI. However, it is not clear to me where I do this in the code files.

Maybe I should just pay OpenAI, but eventually I would like to use a Llama LLM. Help on this would be greatly appreciated.

@Charles_Mo The trick is the base_url parameter. By setting it, you can use OpenAI's client library with Groq, since Groq exposes an OpenAI-compatible endpoint.

The following code should work:

import os

from crewai import Agent
from langchain_openai import ChatOpenAI

# Groq only serves chat completions, so use the chat model class
my_llm = ChatOpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ.get("GROQ_API_KEY"),
    model="llama3-8b-8192",  # a Groq-hosted model; check Groq's docs for current names
)

my_agent = Agent(
    ...,
    llm=my_llm,
)

Awesome, thanks @rokbenko. Sorry to ask a follow-up: have you tried connecting to Llama? I have it installed locally on my Mac, and it would be great to run an open-source LLM; Llama has quite a few variants and is free.

Yes, I’ve tried. I suppose you want to run one of the smaller Llama LLMs? Running these will result in poor(er) CrewAI performance. See this GitHub thread.

FYI: I am running Llama 3.2 Instruct in LM Studio and it performs 'almost' as well as GPT-4o-mini. Llama 3.2 3B is only 3.42 GB.

Yes, I have twin 4090s, but even when I set GPU offload to '0' I still have a functioning LLM.


[Screenshot of LM Studio model settings — note the GPU offload setting]

If you have low memory issues, de-select ‘keep model in memory’.

Just had a 'quick' look at that GitHub thread, and the models referred to there are larger than Llama3.2-3b-instruct!

Worth a try!
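To wire an LM Studio model into CrewAI, the same base_url trick from earlier in the thread applies, because LM Studio exposes an OpenAI-compatible local server (default address http://localhost:1234/v1). This is a sketch: the model string below is a placeholder, and you should use whatever identifier LM Studio shows for the model you have loaded.

```python
from crewai import Agent
from langchain_openai import ChatOpenAI

local_llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local server
    api_key="lm-studio",                  # local server accepts any non-empty key
    model="llama-3.2-3b-instruct",        # placeholder: match your loaded model
)

my_agent = Agent(
    ...,
    llm=local_llm,
)
```

No OpenAI (or Groq) account needed for this one — everything runs on your own machine.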