NVIDIAEmbeddings from a local Docker container

Hello everyone,

I’m running an NVIDIA embedding model locally using Docker. I’ve confirmed via curl that the service is up and embeds sentences correctly.
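For reference, this is roughly the request that works for me, written here as the Python equivalent of my curl call. The port, path, and model name are assumptions based on NVIDIA NIM defaults; adjust them to your container:

```python
import requests

# Assumed NIM defaults: OpenAI-compatible endpoint on port 8000.
# The model name below is just an example; replace it with yours.
resp = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "input": ["Hello world"],
        "model": "nvidia/nv-embedqa-e5-v5",  # placeholder model name
        # Some NIM embedding models also expect an "input_type"
        # field ("query" or "passage").
    },
)
print(resp.json()["data"][0]["embedding"][:5])  # first few dimensions
```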

However, I’ve been struggling to set up a custom embedder to use this local service.

I’ve tried using “openai” as the provider, since NVIDIA advertises OpenAI API compatibility. I set the base_url to localhost and passed a null OpenAI API key, but it didn’t work. I’ve tried many variations of the URL, including the exact one I use with curl, but nothing happens.
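In case it helps, here is a minimal sketch of the OpenAI-compatible route I was attempting. The base URL and model name are assumptions matching my Docker setup; one thing I noticed is that the openai client raises an error if the API key is missing or None, so a dummy string seems to be required:

```python
from openai import OpenAI

# The openai client rejects a None/missing key, so pass a dummy string.
# base_url and model are placeholders for my local container.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy-key")

resp = client.embeddings.create(
    model="nvidia/nv-embedqa-e5-v5",  # placeholder model name
    input=["Hello world"],
)
print(len(resp.data[0].embedding))
```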

I’ve also tried LangChain’s NVIDIAEmbeddings, but the embedder configuration seems to only accept a dictionary as input, and I don’t understand how I could pass the NVIDIAEmbeddings model to the embedder.
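For completeness, this is roughly how I understand the LangChain class is instantiated against a local endpoint (from the langchain-nvidia-ai-endpoints package; the base_url and model are assumptions for my container). My question is how to connect something like this to an embedder that only takes a dictionary:

```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# Point the client at the local container instead of NVIDIA's hosted API.
# base_url and model are placeholders for my setup.
embedder = NVIDIAEmbeddings(
    base_url="http://localhost:8000/v1",
    model="nvidia/nv-embedqa-e5-v5",
)

vector = embedder.embed_query("Hello world")
print(len(vector))
```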

Does anyone have insights or suggestions on how to properly configure this? Any help would be greatly appreciated!