I am continuously failing to integrate the embedding model from Mistral AI as the embedder in CrewAI's memory functionality.
Any assistance from your side will be much appreciated.
Full disclosure: I'm not a Mistral user myself, so take this response just as a pointer to the existing documentation, OK?
Internally, CrewAI uses the Embedchain library for its RAG tools, and this includes the available memory types. So you should follow the same configuration approach used in Embedchain when setting up memories; a rough sketch of the config shape follows the links below. You can check out the configuration details here:
- The Embedchain documentation provides an example configuration using Hugging Face and Mistral.
- The CrewAI documentation on Memory includes examples of using custom embedding providers.
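For reference, the embedder configuration generally has this shape (a sketch only, since I haven't run this against Mistral myself; the provider, model name, and env var are just examples, and the exact keys depend on the docs above):

```python
import os

# Embedchain-style embedder config, as the CrewAI Memory docs describe it.
# Provider and model are examples only; swap in whatever the docs list for
# your provider, and set the matching API key in your environment.
embedder_config = {
    "provider": "huggingface",
    "config": {
        "model": "sentence-transformers/all-MiniLM-L6-v2",  # example model
        "api_key": os.environ.get("HUGGINGFACE_API_KEY"),   # if the provider needs one
    },
}

# Then pass it to the crew, e.g.:
# crew = Crew(agents=..., tasks=..., memory=True, embedder=embedder_config)
```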
Thanks for responding, @Max_Moura
- In the first URL you provided, I couldn't find how to integrate and use Embedchain with CrewAI; if you can assist with that, it would be helpful.
- I had tried referring to the CrewAI documentation before, but was facing issues integrating both Hugging Face and Mistral AI as the embedder. I don't think I could explain that to you well if you haven't tried either yourself.
Thanks again
Edit:
I was able to use the Mistral AI embedding model by referring to Embeddings | Mistral AI Large Language Models and using the "custom" provider in CrewAI; a rough sketch of what worked is below.
Pardon me for the novice query; this was my first time using an embedding model directly alongside an LLM.
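Roughly, the working setup looks like this sketch (my assumptions: the 1.x mistralai client, a MISTRAL_API_KEY environment variable, and the chromadb-style EmbeddingFunction interface that the CrewAI docs describe for the "custom" provider):

```python
import os

from chromadb import Documents, EmbeddingFunction, Embeddings
from mistralai import Mistral  # assuming the 1.x mistralai client

class MistralEmbedder(EmbeddingFunction):
    """Chroma-style embedding function that calls Mistral's embeddings API."""

    def __init__(self, model: str = "mistral-embed"):
        self.client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
        self.model = model

    def __call__(self, input: Documents) -> Embeddings:
        # A single query string can arrive here; the API wants a list of strings.
        texts = [input] if isinstance(input, str) else list(input)
        response = self.client.embeddings.create(model=self.model, inputs=texts)
        return [item.embedding for item in response.data]

# Wired into the crew via the "custom" provider:
# crew = Crew(..., memory=True,
#             embedder={"provider": "custom", "config": {"embedder": MistralEmbedder()}})
```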
I am still having trouble using Hugging Face for embedding models, though.
These are the two error messages I am getting for that:
- ERROR:root:Error during short_term search: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). in query.
- Expected embeddings to be a list of floats or ints, a list of lists, a numpy array, or a list of numpy arrays, got {'error': 'Invalid username or password.'} in query.
I am not able to debug either of these two.
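If it helps narrow things down: the first message usually means the tokenizer received a query that isn't a plain string (or list of strings), and the second looks like the Hugging Face API rejecting the token before any embedding happens. One way to isolate the second is to call the Inference API directly with the same credentials (a minimal sketch, assuming the huggingface_hub client; the model name is only an example):

```python
import os

from huggingface_hub import InferenceClient

# Same token the embedder config would use; if this also fails with an
# auth error, the token/env var is the problem, not CrewAI or Embedchain.
client = InferenceClient(token=os.environ.get("HUGGINGFACE_API_KEY"))
vector = client.feature_extraction(
    "hello world",
    model="sentence-transformers/all-MiniLM-L6-v2",  # example model
)
print(len(vector))  # embedding length if the call succeeded
```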