Hi there,
I am now trying with a local embedder using Ollama.
I have Llama 3.1 as the LLM and nomic-embed-text as the embedder.
This is what I added to my PDFSearchTool:
```python
tool = PDFSearchTool(
    config=dict(
        llm=dict(
            provider="ollama",  # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama3.1",
                temperature=0.0001,
                base_url='http://127.0.0.1:11434',
            ),
        ),
        embedder=dict(
            provider="ollama",  # or openai, ollama, ...
            config=dict(
                model="nomic-embed-text:latest",
                base_url='http://127.0.0.1:11434',
                # title="Embeddings",
            ),
        ),
    )
)
```
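Before wiring this into the tool, it may be worth confirming that the Ollama embedding endpoint itself answers locally. A minimal sketch using only the standard library, assuming Ollama is running on 127.0.0.1:11434 and the `nomic-embed-text:latest` model has been pulled:

```python
import json
import urllib.request

# Ollama's embeddings endpoint takes a model name and a prompt
# and returns {"embedding": [...]}.
payload = json.dumps(
    {"model": "nomic-embed-text:latest", "prompt": "hello world"}
).encode()
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        embedding = json.loads(resp.read())["embedding"]
        print(f"got an embedding of length {len(embedding)}")
except OSError as exc:
    print(f"Ollama not reachable: {exc}")
```

If this prints an embedding length, the local embedder itself is fine and the problem is in how the tool picks up the config.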
I checked that no other LLM is called, especially OpenAI.
I keep getting this token/quota error from OpenAI, but I don't want to use it. I have an account, but I want to stay fully local for the moment.
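One thing worth checking (an assumption about this library version, not a confirmed cause): the underlying RAG layer can fall back to OpenAI embeddings when the `embedder` config is not picked up, and it will do so if an OpenAI key is present in the environment. A quick sketch to see whether any such variables are set:

```python
import os

# Hypothetical diagnostic: if any of these are set, the RAG layer may
# silently use OpenAI instead of the configured Ollama embedder.
suspect_vars = ["OPENAI_API_KEY", "OPENAI_API_BASE"]
found = {name: os.environ[name] for name in suspect_vars if name in os.environ}
print(found if found else "no OpenAI variables set")
```

If a key shows up here, try unsetting it (or pointing it at a dummy value) and re-running to see whether the 429 disappears.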
```
    return self._retry_request(
  File "/home/ru/Documents/personal_agents/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 1092, in _retry_request
    return self._request(
  File "/home/ru/Documents/personal_agents/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 1058, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
```
Can anyone help me? Thanks to all.