pgAssistant helps you design your schema and understand and optimize your PostgreSQL database performance.
- ✨ Open-source
- 🐘 Built for PostgreSQL
👉 Explore the documentation, the posts, or the GitHub repo
👉 If you are looking for good tools to work with Postgres, you should have a look at this
Before you begin
Read this documentation:
While pgAssistant is focused on helping you optimize SQL query performance, it is not intended for real-time PostgreSQL monitoring.
If you’re looking to monitor performance metrics, track query evolution over time, or get deep visibility into your PostgreSQL instance, we highly recommend the open-source tool pgWatch.
✨ Why pgWatch?
- Built with Go → fast and efficient
- Docker-based deployment → quick and easy to set up
- Uses industry standards like Grafana and Prometheus for beautiful dashboards and alerting.
pgWatch is a great complement to pgAssistant — one helps you improve your queries, the other helps you monitor them in action.
How to do it
pgAssistant requires the pg_stat_statements module. If you are not familiar with this module, you can find the official Postgres documentation here.
To enable this module, add this option to the command that starts Postgres:
```
shared_preload_libraries='pg_stat_statements'
```
Then, connect to the database and run this command:
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```
(When you run pgAssistant, it will execute this SQL statement for you.)
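If you want to verify the setup yourself, a few standard queries can confirm the module is active (these are illustrative checks, not something pgAssistant runs; `total_exec_time` assumes PostgreSQL 13 or later):

```sql
-- Check that the library was preloaded at server start
SHOW shared_preload_libraries;

-- Check that the extension is installed in the current database
SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_stat_statements';

-- Once some queries have run, this view should return rows
SELECT query, calls, total_exec_time FROM pg_stat_statements LIMIT 5;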
If you run PostgreSQL in a Docker environment
Here is a sample docker-compose file that enables the module:
...
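As an illustration only (the image tag, password, and port below are placeholder assumptions, not the values from the sample above), a minimal docker-compose.yml that preloads the module might look like this:

```yaml
services:
  postgres:
    image: postgres:16
    # Pass the preload option on the command that starts Postgres
    command: postgres -c shared_preload_libraries=pg_stat_statements
    environment:
      - POSTGRES_PASSWORD=changeme
    ports:
      - "5432:5432"
```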
Install Python modules via venv
```shell
$ virtualenv env
$ source env/bin/activate
$ pip3 install -r requirements.txt
```
Set Up Flask Environment
```shell
$ export FLASK_APP=run.py
```
Start the app
```shell
$ flask run
# OR, for an HTTPS server:
$ flask run --cert=adhoc
```
At this point, the app runs at http://127.0.0.1:5000/database.html.
Before you begin
You must enable the pg_stat_statements module on your Postgres database. Here is the documentation.
Using the NexSol Technologies Docker image
Here is a sample docker-compose.yml file to run pgAssistant:
```yaml
services:
  pgassistant:
    image: nexsoltech/pgassistant:latest
    restart: always
    environment:
      - OPENAI_API_KEY=nothing
      - OPENAI_API_MODEL=codestral:latest
      - LOCAL_LLM_URI=http://host.docker.internal:11434/v1/
      - SECRET_KEY=mySecretKey4PgAssistant
    ports:
      - "8080:5005"
    volumes:
      - ./myqueries.json:/home/pgassistant/myqueries.json
```
The file myqueries.json is not necessary to run pgAssistant, but it can be useful. Please read the doc here.
Environment variables

| Variable | Description | Example value |
|---|---|---|
| OPENAI_API_KEY | Dummy key (required by clients expecting a token) | nothing |
| OPENAI_API_MODEL | Model identifier to use with the API | codestral:latest or mistral:latest |
| LOCAL_LLM_URI | Local endpoint URL for the OpenAI-compatible API | http://host.docker.internal:11434/v1/ |
| SECRET_KEY | Used to encrypt some HTTP session variables | mySecretKey4PgAssistant |

Notes
- OPENAI_API_KEY is required by most clients but not used when querying local LLMs like Ollama. You can set it to any placeholder (e.g. nothing).
- OPENAI_API_MODEL must match the model name loaded in Ollama (e.g. codestral, llama3, mistral, etc.).
- LOCAL_LLM_URI should point to the Ollama server, accessible from inside your Docker container via host.docker.internal.

How to build your Docker image
Simply clone the repo, then build your own image like this:
...
What is it all about
When you create a foreign key, you should always create an index on it (and no, Postgres does not automatically create an index for you).
Should You Always Index Foreign Keys in PostgreSQL?
Foreign keys are an essential part of relational database design. They ensure referential integrity by linking rows between tables. But when it comes to performance, many developers wonder:
“Should I always add an index to a foreign key column?”
...
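To make the point above concrete, here is a hedged illustration (the table and column names are invented for the example, not taken from pgAssistant): declaring the foreign key enforces integrity, but the index on the referencing column must be created explicitly.

```sql
-- Hypothetical schema: orders referencing customers
CREATE TABLE customers (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
);

CREATE TABLE orders (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint NOT NULL REFERENCES customers (id)
);

-- Postgres does NOT create this index for you:
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```

Without that index, deletes on customers and joins on customer_id can fall back to sequential scans of orders.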
The myqueries.json file is used to store your helpful queries.
Each query you add to the JSON file can be searched and executed by pgAssistant.
The JSON format is very simple:
```json
{
  "id": "db_version",
  "description": "Database version",
  "category": "Database",
  "sql": "SHOW server_version;",
  "type": "select",
  "reference": "https://www.postgresql.org/docs/current/sql-show.html"
}
```

| Field | Description |
|---|---|
| id | A unique ID for the query |
| description | The description of your SQL query |
| category | A SQL category like Database, Issue, Table, Index, or whatever you want |
| sql | The SQL query, ended with a ";" |
| reference | A URL to the query documentation or your project documentation |
| type | Two SQL types are allowed: select (performs a select) and param_query (a select query with parameters; each parameter must be in the format $1, $2, etc.) |
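Entries with that shape can be checked mechanically before you load them. As a small illustration (this helper is not part of pgAssistant; the key and type names are taken from the format above), the following Python snippet validates that an entry carries the expected keys and a legal type:

```python
import json

# Keys every myqueries.json entry is expected to carry, per the format above
REQUIRED_KEYS = {"id", "description", "category", "sql", "type", "reference"}
ALLOWED_TYPES = {"select", "param_query"}

def validate_entry(raw: str) -> list[str]:
    """Return a list of problems found in one JSON query entry (empty if valid)."""
    entry = json.loads(raw)
    problems = []
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if entry.get("type") not in ALLOWED_TYPES:
        problems.append(f"unknown type: {entry.get('type')!r}")
    if not entry.get("sql", "").rstrip().endswith(";"):
        problems.append("sql must end with ';'")
    return problems

entry = """{
  "id": "db_version",
  "description": "Database version",
  "category": "Database",
  "sql": "SHOW server_version;",
  "type": "select",
  "reference": "https://www.postgresql.org/docs/current/sql-show.html"
}"""
print(validate_entry(entry))  # → []
```

Running a check like this on each entry catches a missing comma or a misspelled field before pgAssistant ever reads the file.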
Using LLM(s) with pgAssistant
With pgAssistant, you can leverage a Large Language Model (LLM) to enhance your SQL analyses. Here are the available options:
- Locally deployed model, such as Ollama.
- OpenAI: requires an OpenAI API key.
- Any other compatible model: by copying and pasting the prompt generated by pgAssistant.

Environment Variable Configuration
To enable integration with an LLM, configure the following environment variables:
| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key. Any value is required if using a locally deployed LLM. |
| OPENAI_API_MODEL | Model to use, such as o1-mini, codellama:13b, or llama3.2:latest. |
| LOCAL_LLM_URI | URI of your local LLM (e.g., http://localhost:11434). Leave blank for OpenAI. |

Sample usages

With OpenAI:
```
OPENAI_API_KEY = myOpenAPIKey
OPENAI_API_MODEL = gpt-4
```

With local Ollama:
```
OPENAI_API_KEY = peuimportemaisobligatoire
OPENAI_API_MODEL = llama3.2:latest
LOCAL_LLM_URI = http://localhost:11434
```

How does pgAssistant create a prompt to query LLMs?
pgAssistant provides three pieces of information to the LLM model you have chosen:
...