Initial Commit

Anass Ahmed
2017-08-01 12:28:14 +02:00
commit be9b13a2c3
8 changed files with 1058 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,116 @@
# Bucardo Odoo Replication
This repository is an experiment in replicating Odoo for a multi-branch
supermarket (using the Odoo PoS) with Bucardo (multi-master replication).
## What's the use case of Odoo replication?
The use case is pretty simple: we have a main server (Odoo & PostgreSQL)
installed in the company's main datacenter, and we have branches all over
the world that work on the PoS module but don't have a stable connection to
the main server at all times (the connection may drop for several days). So
we need to install Odoo & PostgreSQL servers in each branch to be able to
keep working without interruption, and then sync back to the master Odoo &
PostgreSQL servers automatically when the connection comes back online.
## How to use this repo?
To bring the cluster/stack up, just execute the script:
```bash
$ ./setup.sh
```
which will build the needed Docker images and deploy them according to the
docker-compose.yml file. You need to have `docker` and `docker-compose` in your
`$PATH` before executing this shell script.
The script will also create a testing database with the PoS module and demo
data, replicate it to 2 branches with `pg_basebackup`, and then kick off the
Bucardo replication.
- The main server should be browsable on the URL: http://localhost:8010
- The branch1 server should be browsable on the URL: http://localhost:8020
- The branch2 server should be browsable on the URL: http://localhost:8030
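Before running the script, a quick pre-flight check for the required tools can
save a confusing failure halfway through. The helper below is not part of
setup.sh, just a sketch of what such a check could look like:

```bash
# Hypothetical pre-flight helper (not in setup.sh): check that every
# required tool is available in $PATH before doing any work.
require_tools() {
  local tool missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing required tool: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

require_tools docker docker-compose && echo "all required tools found"
```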
## How to test?
Start by testing something basic like adding, deleting, and modifying a bunch
of users on different instances (main, branch1, branch2) and observe the sync
on the other instances.
Then, create new Points of Sale for the branches in the Point of Sale app.
Note that Bucardo doesn't replicate DDL on its own, so the newly created
PostgreSQL sequences have to be replicated and added to Bucardo manually first:
stop the branch instances, replicate the database again with `pg_basebackup`,
add the new sequences to Bucardo, and then restart everything.
```bash
$ docker-compose stop branch1_db branch2_db
$ docker-compose run --rm --entrypoint='/bin/bash -c' --user=postgres \
branch1_db "rm -rf /var/lib/postgresql/data/*; \
PGPASSWORD=replica pg_basebackup -h master_db -U \
replica -D /var/lib/postgresql/data -P --xlog"
# repeat the pg_basebackup run above for branch2_db
$ docker-compose start branch1_db branch2_db
$ docker-compose exec master_bucardo bucardo add all sequences master_db \
--herd=odoo
$ docker-compose exec master_bucardo bucardo restart
$ docker-compose exec master_bucardo bucardo status # check status
$ docker-compose exec master_bucardo bucardo list dbs # check db statuses
$ docker-compose exec master_bucardo bucardo list syncs # check sync statuses
```
After that, start a session for each branch on a different PoS (preferably with
different users) and start selling, and watch the orders being created on the
PoS of each instance.
You can then disconnect one of these servers from the network (to see what
happens when it comes back online), or just stop the Bucardo instance
altogether to stop the replication from happening (either way, the result is
the same).
```bash
$ docker-compose stop master_bucardo
# go do some PoS orders on different instances, then get back to restart it
$ docker-compose start master_bucardo
```
## What's the end result of this experiment?
Odoo is not designed with multi-master replication or being a distributed
system in mind. It's - at the end of the day - an ancient typical monolithic
web application. So, you'll notice that problems with sequences will arise
(Bucardo for some reason doesn't synchronize them fast as the tables, or Odoo
maybe caches them somehow) as you'll get some unique primary key violations
because the PostgreSQL sequence gave you the same number the other instance
gave to the previous record.
Also, Bucardo (with all of its conflict-resolution strategies) will throw away
the data of one of the instances if they used the same primary key. Say you
created a user with ID 10 on branch1 and another user with ID 10 on branch2
while both were disconnected from the network: when they come back online and
Bucardo starts replicating them, it keeps one of the users and discards the
other!
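The collision is easy to picture. This toy bash simulation (not real PostgreSQL
sequences, just an illustration of the failure mode) shows why two branches
restored from the same basebackup hand out identical primary keys while
disconnected:

```bash
# Toy simulation in plain bash (NOT real PostgreSQL sequences): after
# pg_basebackup, every branch starts from the same sequence value, so
# while disconnected they allocate identical primary keys.
last_value=10                        # sequence value copied into both branches
branch1_nextval=$((last_value + 1))  # branch1's next id while offline
branch2_nextval=$((last_value + 1))  # branch2's next id while offline
echo "branch1 allocates id $branch1_nextval"
echo "branch2 allocates id $branch2_nextval"
# Both allocate 11; on reconnect Bucardo sees two different rows with the
# same primary key and its conflict resolution keeps only one of them.
```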
Add to that that Bucardo neither detects nor replicates DDL, so any module
installation, upgrade, or removal, as well as new PoS (or any other
sequence-creating model), has to be replicated manually (take the branch
instances down, replicate the master database, add the new tables/sequences to
Bucardo, and restart the branch instances and the Bucardo server).
Some of these problems can be solved using BDR (Bi-Directional Replication)
from 2ndQuadrant (especially the DDL replication), but the Odoo source code
would have to be modified to accept such replication (using UUIDs or global
sequences for primary keys instead of plain integer sequences would be one
step forward, for example).
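The "global sequences" idea can be sketched with disjoint arithmetic
progressions: give each node a distinct starting offset and make every sequence
step by the node count, so allocations can never collide. The real fix would
use something like `ALTER SEQUENCE ... INCREMENT BY <n_nodes>` in PostgreSQL;
the plain-bash sketch below (not part of this repo) just demonstrates the
arithmetic:

```bash
# Plain-bash sketch of interleaved ("global") sequences, an assumption of
# this note rather than anything this repo implements. With 3 nodes:
# master allocates 1,4,7,...  branch1: 2,5,8,...  branch2: 3,6,9,...
n_nodes=3
alloc() { echo $(( $1 + n_nodes )); }    # step by the node count

master_id=1; branch1_id=2; branch2_id=3  # disjoint starting offsets
master_id=$(alloc "$master_id")          # master's next id: 4
branch1_id=$(alloc "$branch1_id")        # branch1's next id: 5
# The PostgreSQL equivalent would be roughly (sequence name hypothetical):
#   ALTER SEQUENCE res_users_id_seq INCREMENT BY 3 RESTART WITH <offset>;
```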
## What other options does the setup.sh script accept?
You can pass `BUILD=true` to rebuild the images (if there are new changes to
them), and you can pass `DESTROY=true` to remove the old instances and start
from scratch. The script can also be debugged by passing `DEBUG=true` to see
the executed commands alongside their output.
```bash
$ DEBUG=true BUILD=true DESTROY=true ./setup.sh
```
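The flag handling behind these variables relies on a bash idiom: the expansion
`${DEBUG:=false}` yields the variable's value (assigning the default `false` if
it is unset), and `if` then executes that value as a command, so the `true` and
`false` programs supply the exit status. In isolation (`check_flag` is a
made-up name for illustration):

```bash
# The same pattern setup.sh uses: "${1:-false}" expands to the flag's
# value (default "false"), and `if` runs it as a command, so the exit
# status of the `true`/`false` programs decides the branch.
check_flag() {
  if ${1:-false}; then echo "enabled"; else echo "disabled"; fi
}
check_flag true      # prints "enabled"
check_flag           # prints "disabled"
```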

bucardo/Dockerfile Normal file

@@ -0,0 +1,8 @@
FROM anassahmed/postgres:9.4
MAINTAINER Anass Ahmed <anass.1430@gmail.com>
RUN apt update && apt install -y bucardo
COPY start.sh /
ENTRYPOINT ["./start.sh"]

bucardo/start.sh Executable file

@@ -0,0 +1,40 @@
#!/bin/bash
set -ex
mkdir -p /var/lib/bucardo
mkdir -p /var/run/bucardo
chown bucardo: /var/lib/bucardo /var/run/bucardo
mkdir -p "${PGDATA}"
chown -R postgres: "${PGDATA}"
chmod 700 "${PGDATA}"
if [ ! -s "$PGDATA/PG_VERSION" ]; then
su -l postgres -c "/usr/lib/postgresql/9.4/bin/pg_ctl -D ${PGDATA} initdb"
su -l postgres -c "/usr/lib/postgresql/9.4/bin/pg_ctl -D ${PGDATA} start"
sleep 5
bucardo install --batch
su -l postgres -c "/usr/lib/postgresql/9.4/bin/pg_ctl -D ${PGDATA} -m fast stop"
fi
stop() {
bucardo stop
su -l postgres -c "/usr/lib/postgresql/9.4/bin/pg_ctl -D ${PGDATA} -m \
fast stop"
}
start() {
su -l postgres -c "/usr/lib/postgresql/9.4/bin/pg_ctl -D ${PGDATA} start"
bucardo start
}
start
trap "stop" SIGTERM
trap "stop" SIGINT
trap "stop; start" SIGHUP
# run tail in the background and wait on it, so the traps above can
# interrupt the interruptible `wait` instead of a foreground tail
while true; do
tail -f /var/log/bucardo/log.bucardo & wait ${!}
done

docker-compose.yml Normal file

@@ -0,0 +1,101 @@
version: '2'
services:
master_db:
image: anassahmed/postgres:9.4
build: postgres
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
networks:
master_network:
aliases:
- db
branch1_network:
aliases:
- master_db
branch2_network:
aliases:
- master_db
master_odoo:
image: docker.io/library/odoo:10.0
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
networks:
- master_network
ports:
- "8010:8069"
volumes:
- odoo_data:/var/lib/odoo
master_bucardo:
image: anassahmed/bucardo:9.4
build: bucardo
environment:
- BUCARDO_HOST=master_db
- BUCARDO_PORT=5432
- BUCARDO_DB=bucardo
- BUCARDO_USER=bucardo
- BUCARDO_PASSWORD=bucardo
networks:
master_network:
branch1_network:
branch2_network:
branch1_db:
image: anassahmed/postgres:9.4
build: postgres
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
networks:
master_network:
branch1_network:
aliases:
- db
volumes:
- branch1_db_data:/var/lib/postgresql/data:rw
branch1_odoo:
image: docker.io/library/odoo:10.0
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
networks:
- branch1_network
ports:
- "8020:8069"
volumes:
- odoo_data:/var/lib/odoo
branch2_db:
image: anassahmed/postgres:9.4
build: postgres
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
networks:
master_network:
branch2_network:
aliases:
- db
volumes:
- branch2_db_data:/var/lib/postgresql/data:rw
branch2_odoo:
image: docker.io/library/odoo:10.0
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
networks:
- branch2_network
ports:
- "8030:8069"
volumes:
- odoo_data:/var/lib/odoo
networks:
master_network:
branch1_network:
branch2_network:
volumes:
branch1_db_data:
branch2_db_data:
odoo_data:

pg_hba.conf Normal file

@@ -0,0 +1,96 @@
# PostgreSQL Client Authentication Configuration File
# ===================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file. A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access. Records take one of these forms:
#
# local DATABASE USER METHOD [OPTIONS]
# host DATABASE USER ADDRESS METHOD [OPTIONS]
# hostssl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnossl DATABASE USER ADDRESS METHOD [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type: "local" is a Unix-domain
# socket, "host" is either a plain or SSL-encrypted TCP/IP socket,
# "hostssl" is an SSL-encrypted TCP/IP socket, and "hostnossl" is a
# plain TCP/IP socket.
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof. In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches. It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask. A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts. Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "gss", "sspi",
# "ident", "peer", "pam", "ldap", "radius" or "cert". Note that
# "password" sends passwords in clear text; "md5" is preferred since
# it sends encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE. The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted. Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the postmaster receives
# a SIGHUP signal. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect. You can
# use "pg_ctl reload" to do that.
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records. In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
# CAUTION: Configuring the system for local "trust" authentication
# allows any local user to connect as any PostgreSQL user, including
# the database superuser. If you do not trust all your local users,
# use another authentication method.
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres trust
#host replication postgres 127.0.0.1/32 trust
#host replication postgres ::1/128 trust
host replication replica all md5
host all all all md5

postgres/Dockerfile Normal file

@@ -0,0 +1,4 @@
FROM docker.io/library/postgres:9.4
MAINTAINER Anass Ahmed <anass.1430@gmail.com>
RUN apt update && apt install -y postgresql-plperl-9.4

postgresql.conf Normal file

@@ -0,0 +1,608 @@
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - Security and Authentication -
#authentication_timeout = 1min # 1s-600s
#ssl = off # (change requires restart)
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
# (change requires restart)
#ssl_prefer_server_ciphers = on # (change requires restart)
#ssl_ecdh_curve = 'prime256v1' # (change requires restart)
#ssl_renegotiation_limit = 0 # amount of data between renegotiations
#ssl_cert_file = 'server.crt' # (change requires restart)
#ssl_key_file = 'server.key' # (change requires restart)
#ssl_ca_file = '' # (change requires restart)
#ssl_crl_file = '' # (change requires restart)
#password_encryption = on
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 128MB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# use none to disable dynamic shared memory
# - Disk -
#temp_file_limit = -1 # limits per-session temp file space
# in kB, or -1 for no limit
# - Kernel Resource Usage -
#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#max_worker_processes = 8
#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = logical # minimal, archive, hot_standby, or logical
# (change requires restart)
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on # synchronization level;
# off, local, remote_write, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
# - Archiving -
archive_mode = on # allows archiving to be done
# (change requires restart)
archive_command = 'cp %p /var/lib/postgresql/data/archive/%f' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Server(s) -
# Set these on the master and on any standby that will send replication data.
max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
wal_keep_segments = 10 # in logfile segments, 16MB each; 0 disables
wal_sender_timeout = 60s # in milliseconds; 0 disables
max_replication_slots = 60 # max number of replication slots
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
hot_standby = on # "on" allows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
hot_standby_feedback = on # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#effective_cache_size = 4GB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'UTC'
#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------
# - Query/Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
# - Statistics Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#search_path = '"$user",public' # schema names
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'en_US.utf8' # locale for system error message
# strings
lc_monetary = 'en_US.utf8' # locale for monetary formatting
lc_numeric = 'en_US.utf8' # locale for number formatting
lc_time = 'en_US.utf8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Other Defaults -
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#session_preload_libraries = ''
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf.
#include_dir = 'conf.d' # include files ending in '.conf' from
# directory 'conf.d'
#include_if_exists = 'exists.conf' # include file only if it exists
#include = 'special.conf' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

setup.sh Executable file

@@ -0,0 +1,85 @@
#!/bin/bash
#####################################################
# A script to setup bucardo replication environment #
#####################################################
set -e
if ${DEBUG:=false}; then set -x; fi
PROJECT_NAME="odoobucardoreplication"
DATABASE_OWNER="odoo"
DATABASE="odoo_test"
REPLICA_USER="replica"
REPLICA_PASSWORD="replica"
BUCARDO_USER="bucardo"
BUCARDO_PASSWORD="bucardo"
BRANCHES="branch1 branch2"
# build any changes in the docker images
if ${BUILD:=false}; then
docker-compose build
fi
# destroy the current deployed stack
if ${DESTROY:=false}; then
docker-compose down -v --remove-orphans
fi
# create and start the new stack
docker-compose up -d
sleep 10
# copy configurations to the main master server
docker cp postgresql.conf \
${PROJECT_NAME}_master_db_1:/var/lib/postgresql/data/
docker cp pg_hba.conf \
${PROJECT_NAME}_master_db_1:/var/lib/postgresql/data/
docker-compose exec --user=postgres master_db mkdir -p \
/var/lib/postgresql/data/archive
# reload configurations
docker-compose restart master_db
# add replica and bucardo users
docker-compose exec --user=postgres master_db psql -c \
"CREATE ROLE $REPLICA_USER WITH REPLICATION LOGIN ENCRYPTED PASSWORD \
'$REPLICA_PASSWORD'; \
CREATE ROLE $BUCARDO_USER WITH SUPERUSER LOGIN ENCRYPTED PASSWORD \
'$BUCARDO_PASSWORD'"
# install odoo database with point of sale module
docker-compose exec --user=postgres master_db psql -c \
"CREATE DATABASE $DATABASE WITH OWNER $DATABASE_OWNER;"
docker-compose run master_odoo -- -d $DATABASE -i point_of_sale --no-xmlrpc \
--stop-after-init
# add database, its tables, and its sequences to bucardo
docker-compose exec master_bucardo bucardo add db master_db dbname=$DATABASE \
host=master_db user=$BUCARDO_USER password=$BUCARDO_PASSWORD
docker-compose exec master_bucardo bucardo add all tables master_db \
--herd=odoo
docker-compose exec master_bucardo bucardo add all sequences master_db \
--herd=odoo
for REP in $BRANCHES
do
# copy a postgres PITR backup to branch database
docker-compose stop ${REP}_db
docker-compose run --rm --entrypoint='/bin/bash -c' --user=postgres \
${REP}_db "rm -rf /var/lib/postgresql/data/*; \
PGPASSWORD=$REPLICA_PASSWORD pg_basebackup -h master_db -U \
$REPLICA_USER -D /var/lib/postgresql/data -P --xlog"
# start the container again after copying the backup
docker-compose start ${REP}_db
sleep 10
# add the slave database to bucardo
docker-compose exec master_bucardo bucardo add db ${REP}_db \
dbname=$DATABASE host=${REP}_db user=$BUCARDO_USER \
password=$BUCARDO_PASSWORD
done
# add all branch databases in the sync
docker-compose exec master_bucardo bucardo add sync odoo relgroup=odoo \
dbs=master_db:source,branch1_db:source,branch2_db:source
# show bucardo status
docker-compose exec master_bucardo bucardo status
docker-compose exec master_bucardo bucardo list dbs
docker-compose exec master_bucardo bucardo list syncs