Postgres' on-disk data format is stable within a major release, allowing you to upgrade between minor server versions (for example 9.5.3 to 9.5.4) without fear of corrupting on-disk data.
In general, I think it is best to keep Postgres data in a folder named after the major release number. For example, in my main project folder I have `./var/data/postgres/9.5/pgdata` that is mounted inside the container as `/var/lib/postgresql/data/pgdata` per the advice of the official Postgres Docker page.
You can easily move this folder between machines and start new Postgres containers of the same major version. In fact, this is the easiest way to make a backup of a database (and possibly even a running one): simply tar/zip up the data directory.
```shell
# you need sudo here if you want to preserve ownership UIDs
mkdir backup
sudo tar czf ./backup/pg_9_5.tar.gz ./var/data/postgres/9.5/pgdata

# open the backup
cd backup
sudo tar xzf pg_9_5.tar.gz
```
Note that extracting a tar made in this manner will recreate the same directory structure, in this case the whole `var/data/postgres/9.5/pgdata` path. tar has numerous options to strip off leading directories or add new ones.
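For instance, GNU tar's `--strip-components` flag (combined with `-C`) drops leading directories on extract. A minimal sketch using dummy data in place of a real pgdata directory:

```shell
# Create a nested directory standing in for the backed-up pgdata.
mkdir -p demo/var/data/postgres/9.5/pgdata
echo 'dummy' > demo/var/data/postgres/9.5/pgdata/PG_VERSION
tar czf pg_demo.tar.gz -C demo var

# Extract while dropping the leading var/data/postgres/9.5 directories,
# leaving just pgdata/ inside the target directory.
mkdir -p restored
tar xzf pg_demo.tar.gz -C restored --strip-components=4
ls restored/pgdata
```

Here `--strip-components=4` removes the four leading path components (`var`, `data`, `postgres`, `9.5`), so only `pgdata` lands in `restored/`.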
Though I am sure it is best to stop the server before doing this, it can in fact be performed even on a running container (especially in a low-volume setting like a development environment). Starting a new container from a backup taken this way is no different from recovering from a power loss or crash. It takes a little while for the transaction log to be replayed, but it has never failed on me. (I'm not claiming this is safe, but it does work well for moving a development environment to a laptop for on-the-go work.)
## Logical Backups (dumps)
Long-term backups should always be done as a plain-text SQL dump (or close to it). This compresses better, takes up less space (no indexes), and is more portable between major release versions than the file-based backup.
Postgres has a couple of built-in methods to dump specific databases, or even all of the databases and users in one go. The latter is obviously more useful when upgrading to a new major version.
While the prior file-based method had no real Docker-specific requirements, this one does. Some sort of connection to the running Postgres server is required, so the easiest approach is to execute the commands inside the container that is already running Postgres.
```shell
# for the following I'm assuming a running container 'odoo_db_1'
# that has its main postgres user as 'odoo'
docker exec odoo_db_1 pg_dumpall -U odoo > ./psql9_5.sql

# or stream compressed
docker exec odoo_db_1 pg_dumpall -U odoo | gzip -c > ./psql9_5.sql.gz
```
This runs the `pg_dumpall` program inside the container and redirects its output to a file on the host. Inspecting this file will show that it contains all of the schemas, users, and databases of the running Postgres server.
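As a quick sanity check, you can grep the dump for the statements that recreate roles and databases. The sample file written below is only an illustration of the kind of statements `pg_dumpall` emits; on a real dump you would grep the file you just exported:

```shell
# Toy stand-in for a real pg_dumpall output (illustrative only).
cat > psql_sample.sql <<'EOF'
CREATE ROLE odoo;
ALTER ROLE odoo WITH SUPERUSER LOGIN;
CREATE DATABASE odoo WITH TEMPLATE = template0 OWNER = odoo;
EOF

# List the roles and databases the dump would recreate.
grep -E '^CREATE (ROLE|DATABASE)' psql_sample.sql
```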
## Upgrading Between Major Versions
The above logical backup is really all we need to create a migration between two containers running different versions of Postgres.
You don't need to run both versions at the same time, but if you can, you can automate the exporting and importing a bit, like so. It would seem possible to export and import simultaneously via pipes; however, I could never get it to work. (It would create databases, but fail to load data due to weird errors.)
Something like this has worked well for me in the past.
Start by duplicating the service description for your Postgres container, and change the various `9.x` references to `9.x+1`, like so...
```yaml
version: '2'
services:
  # original
  db:
    environment:
      - POSTGRES_PASSWORD=odoo_db_pass
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    image: postgres:9.5
    networks:
      - backend
    volumes:
      - "./var/data/postgres/9.5/pgdata:/var/lib/postgresql/data/pgdata"
  # new
  db_new:
    environment:
      - POSTGRES_PASSWORD=odoo_db_pass
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    image: postgres:9.6
    networks:
      - backend
    volumes:
      - "./var/data/postgres/9.6/pgdata:/var/lib/postgresql/data/pgdata"
```
Now either create a script like this, or copy/paste just the 'bulk' part of the script.
If you haven't seen `docker-compose exec -T` before, the `-T` flag disables pseudo-TTY allocation, which is what allows the shell redirections of standard input and output to work correctly here.
```shell
#!/bin/bash
docker-compose up --force-recreate -d db && docker-compose up --force-recreate -d db_new
echo '...waiting for db to be ready'
sleep 10
echo '...exporting'
docker-compose exec -T db pg_dumpall -U odoo > ./var/data/postgres/9.6/pgdata/pg.sql
stat ./var/data/postgres/9.6/pgdata/pg.sql
echo '...migrating'
docker-compose exec -T db_new bash -c "psql -U odoo -f \$PGDATA/pg.sql" | grep 'CREATE\|ERROR'
echo '...removing'
rm -f ./var/data/postgres/9.6/pgdata/pg.sql
echo '...analyze'
docker-compose exec -T db_new vacuumdb -a -z -U odoo
echo '...done.'
```
With these steps completed, you can shut both containers down, delete the original section, rename the new section to just `db`, and start everything up again.
For more information, and some useful flags, see this very helpful cheat sheet.