Memory Customization

Hi! I'm trying to use memory with my Crew, but I get some unexpected errors.

from crewai import Crew, Process
# memory classes used below (import paths per the CrewAI memory docs)
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage.rag_storage import RAGStorage


Rag_agent = Crew(
    agents=[context_retriver_agent, senior_api_developer_agent],
    tasks=[context_retrieval_task, api_development_task],
    memory=True,
    verbose=True,
    # crew-level embedder used by the built-in memory
    embedder={
        "provider": "huggingface",
        "config": {
            "model": "sentence-transformers/all-mpnet-base-v2",
        },
    },
    # long-term memory backed by a SQLite file
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/content/long_term/mydatabase.db"
        )
    ),
    # short-term memory backed by RAG storage
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            type="short_term",
            path="./short",
            embedder_config={
                "provider": "huggingface",
                "config": {
                    "model": "sentence-transformers/all-mpnet-base-v2",
                },
            },
        )
    ),
    # entity memory backed by RAG storage
    entity_memory=EntityMemory(
        storage=RAGStorage(
            type="entity_storage",
            path="./entity",
            embedder_config={
                "provider": "huggingface",
                "config": {
                    "model": "sentence-transformers/all-mpnet-base-v2",
                },
            },
        )
    ),
)

These are the errors I got.
ERROR:root:Error during entity_storage save: Request URL is missing an 'http://' or 'https://' protocol. in add.

MEMORY ERROR: An error occurred while saving to LTM: unable to open database file

For long-term memory, do we need to create the database ourselves?
This might be a silly question, but I can't figure it out.

  1. Regarding the long-term memory, make sure the directory "/content/long_term/" exists (a minimal sketch for this is shown after this list).

  2. For the short-term and entity memory, I am having the same issue: "Request URL is missing an 'http://' or 'https://' protocol. in add."
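
Following up on point 1: sqlite3 will create mydatabase.db itself, but it cannot create missing parent directories, which is what the "unable to open database file" error points at. A minimal sketch, reusing the db_path from the post above:

import os

# Make sure the parent folder for the long-term memory SQLite file exists before
# the Crew is built; sqlite3 creates the .db file, but not missing directories.
db_path = "/content/long_term/mydatabase.db"
os.makedirs(os.path.dirname(db_path), exist_ok=True)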

I managed to overcome the "Request URL is missing an 'http://' or 'https://' protocol. in add." issue by configuring the embedder like this.

from crewai import Crew, Process
Rag_agent = Crew(
    agents=[context_retriver_agent, senior_api_developer_agent],
    tasks=[context_retrieval_task, api_development_task],
    memory=True,
    verbose=True,
    knowledge_sources=[string_source],
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2",
            "headers": {"Authorization": f"Bearer {api_key}"},
        },
    },
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="/content/long_term/mydatabase.db")
    ),
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            type="short_term",
            path="./short",
            embedder_config={
                "provider": "huggingface",
                "config": {
                    "api_url": "https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2",
                    "headers": {"Authorization": f"Bearer {api_key}"},
                },
            },
        )
    ),
    entity_memory=EntityMemory(
        storage=RAGStorage(
            type="entity_storage",
            path="./entity",
            embedder_config={
                "provider": "huggingface",
                "config": {
                    "api_url": "https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2",
                    "headers": {"Authorization": f"Bearer {api_key}"},
                },
            },
        )
    ),
)
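
As a side note, the same Hugging Face embedder settings are repeated three times in the snippet above. A minimal sketch of the equivalent configuration with that dictionary defined once and reused everywhere (assuming, as above, that api_key, the agents, the tasks and string_source are already defined):

# One shared embedder config, passed to the crew-level embedder and to the
# short-term and entity memory storages instead of repeating the literal dict.
hf_embedder = {
    "provider": "huggingface",
    "config": {
        "api_url": "https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2",
        "headers": {"Authorization": f"Bearer {api_key}"},
    },
}

Rag_agent = Crew(
    agents=[context_retriver_agent, senior_api_developer_agent],
    tasks=[context_retrieval_task, api_development_task],
    memory=True,
    verbose=True,
    knowledge_sources=[string_source],
    embedder=hf_embedder,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="/content/long_term/mydatabase.db")
    ),
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(type="short_term", path="./short", embedder_config=hf_embedder)
    ),
    entity_memory=EntityMemory(
        storage=RAGStorage(type="entity_storage", path="./entity", embedder_config=hf_embedder)
    ),
)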

But I got three different errors across three runs.

  1. I got this on the first run. I think this is OK; it just says the model needs time to load.

    API Key error! Status code: 503 Response: {'error': 'Model sentence-transformers/all-mpnet-base-v2 is currently loading', 'estimated_time': 20.0}

  2. After this, I ran the code again. This time there was no error loading the embeddings, but I got an error from the Hugging Face embedder model. I used StringKnowledgeSource as the knowledge source.

    from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

    content = "Users name is John. He is 30 years old and lives in San Francisco."
    string_source = StringKnowledgeSource(content=content)
    

    [2025-01-19 14:38:23][ERROR]: Failed to upsert documents: Expected embeddings to be a list of floats or ints, a list of lists, a numpy array, or a list of numpy arrays, got {'error': ["Input should be a valid dictionary or instance of SentenceSimilarityInputsCheck: received ['Users name is John. He is 30 years old and lives in San Francisco.'] in parameters"]} in upsert.

  3. This is the error I got when I ran the code for the third time (see the probe sketch after this list).

    [2025-01-19 15:10:03][ERROR]: Failed to upsert documents: Expected embeddings to be a list of floats or ints, a list of lists, a numpy array, or a list of numpy arrays, got {'error': 'Please log in or use a HF access token'} in upsert.
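
Both "Failed to upsert" messages show the raw Inference API response being handed back where a list of embedding vectors was expected. A minimal probe sketch for debugging this outside CrewAI, assuming the requests library is installed and api_key is the same token used in the crew config above:

import requests

# Call the same endpoint the embedder is configured with and inspect the raw
# response: a nested list of floats means embeddings are coming back, while a
# dict with an "error" key is exactly what shows up in the upsert errors above.
api_url = "https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2"
headers = {"Authorization": f"Bearer {api_key}"}

resp = requests.post(
    api_url,
    headers=headers,
    json={"inputs": ["Users name is John. He is 30 years old and lives in San Francisco."]},
)
print(resp.status_code)
print(resp.json())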

Now I don't know how to move past this. And one more thing: why can't we use the same embedder for the short-term and entity memory?