Embedder insists on calling OpenAI

Hi there,
I am now trying an embedder using Ollama. I have Llama 3.1 as the LLM and nomic-embed-text as the embedder.

This is what I added to my PDFSearchTool:

tool = PDFSearchTool(
    config=dict(
        llm=dict(
            provider="ollama",  # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama3.1",
                temperature=0.0001,
                base_url="http://127.0.0.1:11434",
            ),
        ),
        embedder=dict(
            provider="ollama",  # or openai, ollama, ...
            config=dict(
                model="nomic-embed-text:latest",
                base_url="http://127.0.0.1:11434",
                # title="Embeddings",
            ),
        ),
    )
)

I checked that no other LLM is called, especially OpenAI.

I keep getting this quota error from OpenAI, but I don’t want to use it. I have an account, but I want to stay local for the moment.

  return self._retry_request(
  File "/home/ru/Documents/personal_agents/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 1092, in _retry_request
    return self._request(
  File "/home/ru/Documents/personal_agents/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 1058, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
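Since the goal is to stay local, one quick sanity check (my own addition, not part of the original setup) is to call Ollama's `/api/embeddings` endpoint directly and confirm the local embedder responds at all:

```python
import json
import urllib.request


def build_embed_request(text, model="nomic-embed-text:latest",
                        base_url="http://127.0.0.1:11434"):
    """Build the URL and JSON payload for an Ollama embeddings call."""
    url = f"{base_url}/api/embeddings"
    payload = {"model": model, "prompt": text}
    return url, payload


def ollama_embed(text):
    """POST the request and return the embedding vector (needs Ollama running)."""
    url, payload = build_embed_request(text)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

If `ollama_embed("test")` returns a vector, the local endpoint is fine and the OpenAI call must be coming from somewhere else in the pipeline.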

can anyone help me? Thanks to all

My mistake, I found the issue: it’s the memory embedding.
Many thanks
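For anyone hitting the same thing: the crew's memory uses its own embedder, which falls back to OpenAI unless you pass one explicitly. A minimal sketch, assuming the same local Ollama setup as above (the `Crew` line is illustrative and left commented):

```python
# Reuse the local embedder config for memory so it does not
# fall back to OpenAI's default embeddings.
ollama_embedder = dict(
    provider="ollama",
    config=dict(
        model="nomic-embed-text:latest",
        base_url="http://127.0.0.1:11434",
    ),
)

# crew = Crew(agents=[...], tasks=[...], memory=True, embedder=ollama_embedder)
```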

Hi, I am trying to use a text embedding model other than ada-002. Can you please help me? I specified the model in DirectorySearchTool, but it is not working.

Hi there Shusanket,

Can you show the code?
From what I have seen, the tools prefer absolute paths over relative ones.

Let me know
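For example, a relative directory can be made absolute before handing it to the tool (a small sketch; `"policies"` is a placeholder path):

```python
from pathlib import Path

# Resolve a relative directory to an absolute path before passing it to the tool.
directory = str(Path("policies").resolve())

# policiesRagtool = DirectorySearchTool(directory=directory)
```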

policiesRagtool = DirectorySearchTool(
    config=dict(
        llm=dict(
            provider="openai",  # Options include ollama, google, anthropic, llama2, and more
            config=dict(
                model="gpt-4o",
                # Additional configurations here
            ),
        ),
        embedder=dict(
            provider="openai",  # or openai, ollama, ...
            config=dict(
                model="text-embedding-3-small",
                # task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
    directory=dir,
)

Here is the code, thank you.

What’s not working? What error do you get?

"WARNING:chromadb.segment.impl.vector.local_persistent_hnsw:Number of requested results 3 is greater than number of elements in index 1, updating n_results = 1" and it is not retrieving any info from the directory.

Now even when using the default text embedder it is not generating any embeddings; the process finishes at 0.
Here is the code:
policiesRagtool = DirectorySearchTool(directory=dir)