Can you advise me on where I should start to build AI Agent examples? I think the video, although good, might be missing many details.
My main issue is not understanding how to deal with this error:
ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py)
Hi @Charles_Mo,
One of the best places to start is the Getting Started with CrewAI guide.
Reason: there are several LLM Agent frameworks, and each has its own philosophy, structure, and terminology. If you have decided on CrewAI, then the best place to start is the ‘getting started’ section at the link above.
All LLM Agent frameworks have one common denominator: however they structure the user-facing level, and whatever terms they use (Task, Agent, etc.), under the hood they all do basically the same thing:
They build prompts and context from the definitions of Tasks, Agents, and sometimes memory, to produce a single text prompt that is fed into an LLM to perform a task. Yes, Tasks and Agents are just named containers where text is collected from the user; under the hood, each framework has its own way of manipulating that text to form a prompt.
That’s the basics of all LLM Agent type systems.
So work through the ‘getting started’ guide to learn how CrewAI does this.
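To make that concrete, here is a minimal sketch of what those named containers look like in CrewAI. The role, goal and task text below are only illustrative placeholders (not from any tutorial), and it assumes an LLM/API key is already configured:

```python
from crewai import Agent, Task, Crew

# An Agent is essentially named text: the role, goal and backstory all end up
# folded into the prompt that is eventually sent to the LLM.
researcher = Agent(
    role="Researcher",
    goal="Summarise recent news about AI agents",
    backstory="A careful analyst who always cites sources.",
)

# A Task is more text: a description and an expected output, tied to an agent.
summary_task = Task(
    description="Write a short summary of this week's AI agent news.",
    expected_output="Five bullet points, each with one source.",
    agent=researcher,
)

# The Crew wires it together; kickoff() assembles the final prompt(s) from all
# of the text above and sends them to the configured LLM.
crew = Crew(agents=[researcher], tasks=[summary_task])
print(crew.kickoff())
```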
We are all here to help you if needed.
Best of luck
Your issue: Have you installed the requirements?
N.B. I have noticed some discussions on this Discord about issues with different Python versions; it may be worth a look. In the search box (top right), type in ‘python’ …
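On the ImportError itself: that error usually points to the installed langchain-core being older than the version expected by whichever integration package is importing LangSmithParams. A quick, standard-library-only way to check what you actually have installed (the package names below are the usual PyPI ones; adjust them to whatever the tutorial’s requirements list):

```python
from importlib import metadata

# Print the installed versions of the packages usually involved in this error.
for pkg in ("crewai", "langchain", "langchain-core", "langchain-groq", "langchain-openai"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```

If langchain-core turns out to be older than what the requirements file pins, reinstalling the requirements into a fresh environment (or upgrading langchain-core) is the usual fix.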
Okay, I have finally followed the CrewAI “Build Your First AI Crew” tutorial, but I am getting a lot of “OpenAIError” exceptions.
I know why, but I am not sure how to fix it. I am attempting to use a Groq LLM, so I have defined the API key in the .env file, but it is clear I need to ‘tell’ the code to use the Groq API instead of OpenAI. However, it is not clear to me where in the code files to do this.
Maybe I should just pay for OpenAI, but eventually I would like to use a Llama LLM. I would be very grateful for any help on this.
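For reference, one common way to do this is to build the Groq model explicitly and pass it to each Agent. This is a sketch only: it assumes the langchain-groq package is installed and a CrewAI version that accepts a LangChain chat model on the Agent’s llm parameter, and the model name is just an example.

```python
import os
from dotenv import load_dotenv
from crewai import Agent
from langchain_groq import ChatGroq

load_dotenv()  # loads GROQ_API_KEY from the .env file into the environment

groq_llm = ChatGroq(
    model="llama3-70b-8192",                  # example Groq-hosted model
    groq_api_key=os.environ["GROQ_API_KEY"],
)

# Passing llm explicitly is what stops CrewAI from falling back to its OpenAI
# defaults, which is what raises OpenAIError when no OpenAI key is set.
researcher = Agent(
    role="Researcher",
    goal="Answer questions using the Groq-hosted model",
    backstory="Illustrative placeholder text.",
    llm=groq_llm,
)
```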
Awesome, thanks @rokbenko. Sorry to ask a follow-up: have you tried connecting to Llama? I have it installed locally on my Mac, and it would be great to run an open-source LLM; Llama has quite a few variants and it is free.
Yes, I’ve tried. I suppose you want to run one of the smaller Llama LLMs? Running these will result in poor(er) CrewAI performance. See this GitHub thread.
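If it helps, here is a sketch of the local route as well. It assumes the local Llama is served through Ollama (the thread doesn’t say how it was installed) and, again, a CrewAI version that accepts a LangChain chat model; the model tag is just an example.

```python
from crewai import Agent
from langchain_community.chat_models import ChatOllama

# Talks to a locally running Ollama server (default http://localhost:11434).
local_llm = ChatOllama(model="llama3")  # example local model tag

agent = Agent(
    role="Assistant",
    goal="Run entirely on a locally hosted Llama model",
    backstory="Illustrative placeholder text.",
    llm=local_llm,
)
```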