Hi,
I'm using the WebsiteSearchTool with Ollama running locally, configured as below.
from crewai_tools import WebsiteSearchTool

web_search_tool = WebsiteSearchTool(
    config=dict(
        llm=dict(
            provider="ollama",
            config=dict(
                model="llama3.1:latest",
                base_url="http://127.0.0.1:5089",
                # Additional configurations here
            ),
        ),
        embedder=dict(
            provider="ollama",  # or openai, ...
            config=dict(
                model="nomic-embed-text:latest",
                base_url="http://127.0.0.1:5089",
            ),
        ),
    )
)
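For context, this is roughly how I'm wiring the tool into an agent (simplified; the role/goal/task text is just a placeholder, and I'm assuming the newer crewai.LLM wrapper here — in older versions I pass a LangChain chat model instead):

from crewai import Agent, Task, Crew, LLM

# Point the agent LLM at the same local Ollama instance (LiteLLM "ollama/" prefix)
llm = LLM(
    model="ollama/llama3.1:latest",
    base_url="http://127.0.0.1:5089",
)

researcher = Agent(
    role="Web Researcher",  # placeholder
    goal="Answer questions using the given website",  # placeholder
    backstory="Looks up information on websites.",  # placeholder
    tools=[web_search_tool],
    llm=llm,
    verbose=True,
)

task = Task(
    description="Summarise the key points of https://example.com",  # placeholder
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()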
And I get the following error:
*** ERROR MESSAGE ***
Error parsing LLM output, agent will retry: I did it wrong. Invalid Format: I missed the ‘Action:’ after ‘Thought:’. I will do right next, and don’t use a tool I have already used.
If you don’t need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:
Thought: I now can give a great answer
Final Answer: my best complete final answer to the task.
Error parsing LLM output, agent will retry: I did it wrong. Invalid Format: I missed the ‘Action Input:’ after ‘Action:’. I will do right next, and don’t use a tool I have already used.
Error parsing LLM output, agent will retry: I did it wrong. Invalid Format: I missed the ‘Action:’ after ‘Thought:’. I will do right next, and don’t use a tool I have already used.
If you don’t need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:
Thought: I now can give a great answer
Final Answer: my best complete final answer to the task.
*** END OF ERROR MESSAGE ***
Is there a system prompt template that I need to apply for Ollama/Llama3.1 (for example, something like the sketch below), or is there something else I need to configure to make this work with non-OpenAI models?
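For example, is it something like passing custom templates on the same Agent as above (just a guess based on the system_template/prompt_template/response_template arguments I've seen in the docs — the Llama 3.1 special tokens below may not be exactly right)?

researcher = Agent(
    role="Web Researcher",  # placeholder, same agent as above
    goal="Answer questions using the given website",  # placeholder
    backstory="Looks up information on websites.",  # placeholder
    tools=[web_search_tool],
    llm=llm,
    # Guess: custom chat templates in Llama 3.1's format
    system_template="<|start_header_id|>system<|end_header_id|>\n{{ .System }}<|eot_id|>",
    prompt_template="<|start_header_id|>user<|end_header_id|>\n{{ .Prompt }}<|eot_id|>",
    response_template="<|start_header_id|>assistant<|end_header_id|>\n{{ .Response }}<|eot_id|>",
)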
Please advise
Thanks
David