Before you begin

You must enable the pg_stat_statements module on your PostgreSQL database. The documentation is available here
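Enabling the module typically means loading it at server start and creating the extension in the target database. A minimal sketch (the restart step and database name depend on your setup):

```sql
-- postgresql.conf: load the module at server start (requires a restart)
-- shared_preload_libraries = 'pg_stat_statements'

-- Then, connected to the database pgAssistant will analyze:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Quick check that statement statistics are being collected:
SELECT count(*) FROM pg_stat_statements;
```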

Using the NexSol Technologies Docker image

Here is a sample docker-compose.yml file to run pgAssistant:

services:
  pgassistant:
    image: nexsoltech/pgassistant:latest
    restart: always
    environment:
      - OPENAI_API_KEY=nothing
      - OPENAI_API_MODEL=codestral:latest
      - LOCAL_LLM_URI=http://host.docker.internal:11434/v1/
      - SECRET_KEY=mySecretKey4PgAssistant
    ports:
      - "8080:5005"
    volumes:
      - ./myqueries.json:/home/pgassistant/myqueries.json
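With this file in place, the stack is started the usual Docker Compose way (the service name pgassistant comes from the file above):

```shell
# Start pgAssistant in the background
docker compose up -d

# Follow the logs to check the startup
docker compose logs -f pgassistant
```

The web UI is then reachable on http://localhost:8080 (port 8080 on the host is mapped to 5005 in the container).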

The myqueries.json file is not required to run pgAssistant, but it can be useful. Please read the documentation here

Environment variables

| Variable | Description | Example value |
|---|---|---|
| OPENAI_API_KEY | Dummy key (required by clients expecting a token) | nothing |
| OPENAI_API_MODEL | Model identifier to use with the API | codestral:latest or mistral:latest |
| LOCAL_LLM_URI | Local endpoint URL for the OpenAI-compatible API | http://host.docker.internal:11434/v1/ |
| SECRET_KEY | Used to encrypt some HTTP session variables | mySecretKey4PgAssistant |

Notes

  • OPENAI_API_KEY is required by most clients but not used when querying local LLMs like Ollama. You can set it to any placeholder (e.g. nothing).
  • OPENAI_API_MODEL must match the model name loaded in Ollama (e.g. codestral, llama3, mistral, etc.).
  • LOCAL_LLM_URI should point to the Ollama server, accessible from inside your Docker container via host.docker.internal.
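To verify the setup, you can query Ollama's OpenAI-compatible endpoint directly (a sketch, assuming Ollama runs on the host with its default port 11434):

```shell
# List the models the local Ollama server exposes;
# OPENAI_API_MODEL must match one of the returned names
curl http://localhost:11434/v1/models

# Pull the model if it is not listed yet
ollama pull codestral:latest
```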

How to build your docker image

Simply clone the repository and build your own image like this:

git clone https://github.com/nexsol-technologies/pgassistant.git
cd pgassistant
docker build . -t mypgassistant:1.0
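The resulting image can be run like the published one, for example (the environment values below are the same placeholders as in the compose file above):

```shell
docker run -d \
  -e OPENAI_API_KEY=nothing \
  -e OPENAI_API_MODEL=codestral:latest \
  -e LOCAL_LLM_URI=http://host.docker.internal:11434/v1/ \
  -e SECRET_KEY=mySecretKey4PgAssistant \
  -p 8080:5005 \
  mypgassistant:1.0
```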