Category Archives: Database

Postfix, Dovecot and PostgreSQL Setup

In trying to set up a simple mail server on Ubuntu for a couple of small virtual domains with limited users, I came across a couple of excellent articles, but they were written for use with MySQL.  I have a machine with limited resources running custom software that uses PostgreSQL.  Although flat files would have been acceptable, I didn’t have the same luck in breadth of documentation.  So I decided to work from the excellent articles I found and adapt them to Postgres for my server.

The best guide I found was from Linode, and is geared to help customers setting up Email with Postfix, Dovecot, and MySQL on their excellent platform.  The second guide I found left a lot to be desired, but since it was Postgres specific I was able to take those pieces from it and adapt it to the Linode guide.  That guide was called How to setup a postfix email server with Dovecot and published as a community tutorial from DigitalOcean.  While DigitalOcean and Linode are tight competitors, Linode wins out for having the better mail server how-to.

I’m going to mainly show what changes need to be done to the Linode guide in order to make it work with Postgres.  Otherwise use that guide for complete instructions. I’ll assume you already have a Postgres server up and running on that machine.

Postgres – Creating the Database

Your first change will be under the “Creating the Database” heading.  Let’s start by using the DigitalOcean guide to create the user.  Let’s try and keep the Linode naming convention as much as possible though.

First follow the adjustments to the /etc/postgresql/&lt;version&gt;/main/pg_ident.conf file.

mailmap         dovecot                 mailuser
mailmap         postfix                 mailuser
mailmap         root                    mailuser

Reload Postgres and set up the user and database.

# sudo -u postgres psql
postgres=# CREATE USER mailuser;
postgres=# CREATE DATABASE mailserver WITH OWNER mailuser;
postgres=# \c mailserver
mailserver=# GRANT CREATE, USAGE ON SCHEMA public TO mailuser;
mailserver=# \q
# sudo psql -U mailuser -d mailserver

CREATE SEQUENCE seq_mail_domain_id START 1;
CREATE SEQUENCE seq_mail_user_id START 1;
CREATE SEQUENCE seq_mail_alias_id START 1;

CREATE TABLE virtual_domains (
  domain_id INT2 NOT NULL DEFAULT nextval('seq_mail_domain_id'),
  domain_name varchar(50) NOT NULL,
  PRIMARY KEY (domain_id)
);

CREATE TABLE virtual_users (
  user_id INT2 NOT NULL DEFAULT nextval('seq_mail_user_id'),
  domain_id INT2 NOT NULL,
  password varchar(106) NOT NULL,
  email varchar(100) NOT NULL,
  PRIMARY KEY (user_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

CREATE TABLE virtual_aliases (
  alias_id INT2 NOT NULL DEFAULT nextval('seq_mail_alias_id'),
  domain_id INT2 NOT NULL,
  source varchar(100) NOT NULL,
  destination varchar(100) NOT NULL,
  PRIMARY KEY (alias_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);
To add your domains to the database:

INSERT INTO virtual_domains (domain_name)
  VALUES ('example.com'), ('example.net');

Do a simple select from the virtual_domains table and note the id of the domain you wish to add addresses for:

SELECT * FROM virtual_domains;

According to the Postgres document we referenced earlier, you have to generate the password hash from the command line like so:

# doveadm pw -s sha512 -r 100
Enter new password: ...
Retype new password: ...

Now you need to add some email addresses like so:

INSERT INTO virtual_users
  (domain_id, password, email)
VALUES
  ('1', '{SHA512}.............................................................==', ''),
  ('1', '{SHA512}.............................................................==', '');

Now, if you created your user according to the DigitalOcean document, there is no password.  You’ll have to add one for your user like this:

ALTER ROLE mailuser WITH PASSWORD 'supersecretPaSSvvord';
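If Postfix or Dovecot will connect over TCP with that password, your pg_hba.conf needs a matching entry.  This is a sketch assuming the database and user names above; the file's path varies by version:

```
# /etc/postgresql/<version>/main/pg_hba.conf
# let mailuser reach the mailserver database over local TCP with a password
host    mailserver    mailuser    127.0.0.1/32    md5
```

Reload Postgres after editing pg_hba.conf so the new rule takes effect.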

That should be it for your direct Postgres editing. You can follow all the testing they show for MySQL since it looks like standard SQL.


You can follow the Linode guide for Postfix, only needing to make changes in step 8.  Where they name their files, change the mysql: prefix to pgsql: and rename the files to match.  Everything else there is the same.

  #Virtual domains, users, and aliases
  virtual_mailbox_domains = pgsql:/etc/postfix/
  virtual_mailbox_maps = pgsql:/etc/postfix/
  virtual_alias_maps = pgsql:/etc/postfix/

Watch for these filenames further down, as you’ll have to remember the change.  The actual content of the files, including the queries themselves from step 11 on, is the same as for MySQL.
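For reference, each of those map files shares the same layout.  Here is a sketch of the domains lookup, with a hypothetical filename and connection details assumed from the database setup above:

```
# /etc/postfix/pgsql-virtual-mailbox-domains.cf (hypothetical name)
user = mailuser
password = supersecretPaSSvvord
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE domain_name='%s'
```

The other two files differ only in the query, which selects from virtual_users and virtual_aliases respectively, just as in the Linode guide.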

The Linode document had a great line to test your db connection in step 15 and on.  Issue this from the shell:

postmap -q pgsql:/etc/postfix/


Keep in mind when working through the Dovecot configuration: if you don’t intend to use POP3, don’t enable it.

In step 25 specify pgsql instead of mysql in the driver line.

In step 27, note in the dovecot-sql.conf.ext file that your default_pass_scheme, using the method above from the DigitalOcean document, is SHA512, not SHA512-CRYPT as in the Linode document.
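Putting steps 25 and 27 together, the relevant lines of dovecot-sql.conf.ext end up looking roughly like this (connection details assumed from the setup above):

```
driver = pgsql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=supersecretPaSSvvord
default_pass_scheme = SHA512
password_query = SELECT email AS user, password FROM virtual_users WHERE email='%u';
```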


That’s it! Everything else should be the same.  Follow the Linode guide to test your connections, make sure your firewall is open, and if you have any problems they have an excellent troubleshooting guide linked from the original article.  Enjoy!

Get Top URLs in SGUIL (with SecurityOnion Example)

Here’s a quick way to get a list of the top URLs seen by a system that is monitoring traffic with SGUIL.  Log in to the MySQL database and query the “event” table for signatures that contain “URL”.  The example below is for SecurityOnion.

# In mysql, gets the top urls

# -A turns off "reading table information for completion of table and column names" for faster DB selection

mysql -uroot -A

use securityonion_db;

# Change date in the WHERE clause and number in LIMIT
# This query below retrieves the top 100 URLS after the date specified
SELECT event.signature, count(*)
FROM event
WHERE event.timestamp > '2013-03-01'
AND event.signature LIKE '%URL%'
GROUP BY event.signature
ORDER BY count(*) DESC
LIMIT 100;

This is a good quick way to find out what people are requesting.  However, because so many CDNs, ad servers and trackers use multiple hostnames, you don’t get the big picture of what is coming from just the domain itself.  This is great if you are retrieving statistics to help you tune performance on your ad blocker or web proxy cache.
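If you do want the per-domain picture, you can post-process the results: dump the hostnames to a file and collapse each one to its last two labels before counting.  A rough shell sketch follows; the input file and its contents are hypothetical, and keeping only the last two labels will miscount country-code domains like .co.uk:

```shell
# hosts.txt: one hostname per line, pulled from the signature column
printf 'a.cdn.example.com\nb.cdn.example.com\nads.tracker.net\n' > hosts.txt

# collapse each hostname to its last two dot-separated labels,
# then count how often each resulting domain appears
awk -F. 'NF >= 2 { print $(NF-1) "." $NF } NF < 2 { print }' hosts.txt \
  | sort | uniq -c | sort -rn
```
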

MySQL too many connections – solution

When greeted with a MySQL error this morning stating that a database connection could not be made from our PHP web application, I had to do some testing.  First, try connecting to the database.  I did this from a remote host and was thrown back an “Error: too many connections.”

From there you should log in as root from the localhost and issue:

SHOW FULL PROCESSLIST;

This will give a list of the connections. Take a good look at the big offenders, because you’ll need to troubleshoot those applications later and find out why they hold so many connections.  Now that you know the problem, restart MySQL and connections should resume as normal (with the excessive stale connections released) and you can go about the repair.

Also, in your my.cnf you can reduce the wait_timeout which will reduce the amount of time a connection can be held open by an application. The default is 28800 seconds, or 8 hours. I reduced mine to 30 minutes. You could go less but we have some apps that do their own connection pooling and don’t want to mess that up. In my.cnf:

wait_timeout = 1800

If you are absolutely sure you need more connections, you can increase this from the default 151 in your my.cnf as well:

# default max_connections is 151 (1 spare for super user)
max_connections = 201
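After editing my.cnf and restarting, you can confirm the values the running server is actually using from the client:

```
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'max_connections';
```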

Now you need to keep an eye on the situation by issuing SHOW FULL PROCESSLIST once in a while to see if the connections are filling up fast. And don’t forget to visit any applications that show up in there frequently.  I found two applications that were using persistent connections, held open for two hours, when they just didn’t need to be.

How To Disable WordPress Plugins Manually

If you get stuck being unable to login to WordPress because of a misbehaving plugin, you have a couple options.

You can log in to your WordPress database using the command-line MySQL client or phpMyAdmin, and from there look for the wp_options table. If you are using a security plugin that changes table prefixes, you’ll have to replace wp_ with your prefix. Once there, look for a value in the option_name column called active_plugins. You can find it like this:

SELECT * FROM wp_options WHERE option_name like '%plugin%';

This will give you a list of plugin options which you can examine. If you are using WPMU or WP3 Multisite and this is a globally enabled plugin, you’ll need to use the table name with the ID of the site in the prefix, like wp_2_options instead of wp_options. It used to be that the option_name “active_plugins” contained all plugins, but it appears from examining this that option_name “transient_plugin_slugs” contains plugins. If you choose to modify it, copy and paste the original value somewhere for easy editing and replacement once fixed.

The option I chose for this Multisite plugin that prevented login was to simply rename the folder of it in the plugins directory. That would cause WordPress not to load the bad files, because they were no longer in the expected location.
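The rename itself is a single mv from the site root. Here's a sketch with a hypothetical plugin folder name; the mkdir just simulates the layout so it is safe to try anywhere:

```shell
# simulate a WordPress install's plugin layout
mkdir -p wp-content/plugins/misbehaving-plugin

# renaming the folder means WordPress can no longer find the plugin's
# files in the expected location, so it won't load them
mv wp-content/plugins/misbehaving-plugin wp-content/plugins/misbehaving-plugin.disabled

ls wp-content/plugins
```
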

Once into the admin dashboard page, you can then disable the plugin completely and take whatever actions you need to repair, replace or trash it for good.

Mysql Client Switch for Vertical Output

Did you ever need to examine one line from a MySQL table and were unable to line up the column headers with the data because the line was too long?

Well, no fear: it turns out the MySQL client will arrange output vertically for easier reading if you terminate your statement with \G instead of a semicolon. This is very handy when you are looking at only one row. Here’s an example:

mysql> SELECT * FROM emp_vw WHERE id = 10110 LIMIT 1 \G
*************************** 1. row ***************************
id: 10110
company_id: 21100
last_name: Stall
first_name: John
gender: MALE
address: 277 142nd Ave
city: New York
state: NY
phone: (888)555-1212
birthday: 1963-05-12
active: 1
timestamp: 2011-11-30 18:58:50
1 row in set (0.01 sec)

This is a great way to examine the contents of the fields of any table for the purposes of investigation. It works very well for views and tables with many columns that form long lines when displayed horizontally. Vertical output can also be made the default in your my.cnf file, but I prefer to leave it off so it can be enabled with \G only when needed.
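If you did want vertical output on permanently, the client accepts it as a startup option, which (if I recall correctly) can also be set in the [mysql] section of my.cnf, the equivalent of passing -E/--vertical on the command line:

```
[mysql]
# make ALL query output vertical; equivalent of mysql -E
vertical
```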

MySQL ERROR 1396 (HY000): Operation CREATE USER failed

While creating a new user in MySQL today, I accidentally pressed the return key in the middle of entering a password via the command-line client, which looked like this:

mysql> CREATE USER 'userguy'@'' IDENTIFIED BY '
'> ;
'> '''';;
'> ';
Query OK, 0 rows affected (0.00 sec)

That isn’t what I wanted; my attempts to terminate the statement didn’t work and the command succeeded anyway, so I had to fix it.

I issued a DROP USER command and was greeted with:

mysql> drop user userguy;
ERROR 1396 (HY000): Operation DROP USER failed for 'userguy'@'%'

That didn’t work either. Instead I did this:

delete from mysql.user where User = 'userguy';
delete from mysql.db where User = 'userguy';

And when I tried to create the user again I was still met with ERROR 1396.

How to solve MySQL ERROR 1396 (HY000): Operation CREATE USER failed

Once those user entries are removed from the User and db tables, you just have to flush privileges:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Now when you enter your CREATE USER command (with the correct syntax and a valid single line password), it should succeed.

And stop working when you are tired!

Upgrading PostgreSQL to a new major version

Upgrading PostgreSQL to a new major version can be a challenge, but it isn’t difficult. You need to plan ahead and do some testing afterward, but PostgreSQL includes everything you need to make the change almost seamlessly. Here is a step-by-step rundown of how to make the upgrade and gain all the benefits that come with a new Postgres version.

Upgrading PostgreSQL

Upgrading PostgreSQL need not be hard. The developers have provided all the facilities you need to upgrade nearly painlessly and, depending on the extent of your use, nearly seamlessly.

These notes were made when I was upgrading from PostgreSQL 8.2.6 to 8.3.0, but the information should apply equally to other versions. Keep in mind the version numbers are specific to these releases; substitute your own accordingly.

According to the PostgreSQL INSTALL doc, here’s their order of install (not upgrade), but the procedure is almost identical with the addition of backing up and restoring your old data.

./configure
gmake
gmake install
adduser postgres *for new install only
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test


First, run configure. I changed the install path to reflect the version so that I can run the two servers in parallel.
./configure --prefix=/usr/local/postgres-8.3.0


Watch for errors. Run gmake, not make; on most Linux systems gmake is simply a symlink to GNU make, but on others plain make is not GNU make. Compilation is pretty straightforward. The mailing list is there to help if you have problems compiling, which is rare. Googling for particular errors is also helpful, and will reveal a fix if you’ve encountered a bug or platform-specific problem.

gmake install OR gmake install-strip

If you are not using the --enable-debug option then consider using gmake install-strip when installing. This should reduce the file size of the binary files and libraries as well as save some memory while loading. I use this on any non-development machine.

Make the /data directory

Straightforward here. Create the directory and initialize the new database cluster.

mkdir /usr/local/postgres-8.3.0/data
chown postgres /usr/local/postgres-8.3.0/data
su - postgres
/usr/local/postgres-8.3.0/bin/initdb -D /usr/local/postgres-8.3.0/data

While running initdb you’ll see:

The default text search configuration will be set to "english".
creating dictionaries ... ok

This is evidence of the brand-new tsearch, which is now integrated into the core. Many people had upgrade problems with tsearch in the past, and for me tsearch is an important tool (especially for people migrating from MSSQL), so it is a great step forward to see it included in the core. Please see below if you were using the old tsearch contrib module.

Compile contrib modules

After you perform the install, pause here. Take a look at the contrib directory in the source. Are you using any contributed modules? If so, this is a good place to compile and install them, before you restore your old database. If you don’t, and the contrib modules had you insert some SQL into your database, then when you restore your dump the SQL will be present but the module won’t, and you will get errors.

Read the README file in the top contrib directory for the procedure. It is usually gmake all; gmake install. Don’t run the regression tests now, because you aren’t inserting the SQL code that goes with the modules yet. Be aware that regression tests try to connect to the default port, which means the original server, and will probably fail if you run the new server in parallel on a different port later on. From the manual for contrib modules:

After a major-version upgrade of PostgreSQL, run the installation script again, even though the module’s objects might have been brought forward from the old installation by dump and restore. This ensures that any new functions will be available and any needed corrections will be applied.

Also, if the new version is >= 8.3.0 and you were using an older version of tsearch, you’ll need to read the manual page Migration from Pre-8.3 Text Search. Loading a pre-8.3 dump that contained contrib/tsearch into an 8.3 server will cause problems; it caused my restore to fail. The manual suggests loading contrib/tsearch before the restore because it will prevent loading the old objects from the old database.

Edit postgresql.conf and other configuration files

Do a diff of your postgresql.conf and pg_hba.conf files against the new versions. If moving between major versions I would edit them by hand. When moving between minor versions last time, I just copied the old over the new as suggested, and found errors in the log at startup because a configuration option had been removed and I was still setting it.

Backup your old database

Seems like an odd place to be performing the backup procedure, doesn’t it? Yes, but the PostgreSQL manual has a good reason for it:

To make the backup, you can use the pg_dumpall command from the version you are currently running. For best results, however, try to use the pg_dumpall command from PostgreSQL 8.3.0, since this version contains bug fixes and improvements over older versions. While this advice might seem idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to install the new version in parallel with the old version. In that case you can complete the installation normally and transfer the data later. This will also decrease the downtime.

The reason here is that the pg_dump or pg_dumpall binary from the new release has bug fixes over the old version. Now it makes sense. With that in mind I backed up to a file first, just as a precaution.

/usr/local/postgres-8.3.0/bin/pg_dumpall -p 5432 -U postgres | bzip2 -zv --best > pre_8.3_dump.sql.bz2
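Since this dump is your safety net if the restore goes sideways, it's worth checking the compressed file before moving on. bzip2 can verify integrity without extracting anything; here is a sketch against a stand-in file (substitute your real dump name):

```shell
# stand-in for the real dump so this sketch is runnable anywhere
printf 'SELECT 1;\n' | bzip2 --best > pre_8.3_dump.sql.bz2

# -t verifies the archive's integrity without writing anything out
bzip2 -tv pre_8.3_dump.sql.bz2

# peek at the start of the dump without decompressing it to disk
bzcat pre_8.3_dump.sql.bz2 | head -n 5
```
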

This command uses the NEW version of the pg_dumpall binary (8.3.0) and tells it to run against the currently running PostgreSQL backend (8.2.6) on port 5432, storing the output in a bzipped file in the current directory.

Start new PostgreSQL server on a different port

su - postgres
/usr/local/postgres-8.3.0/bin/postgres -p 5543 -D /usr/local/postgres-8.3.0/data >logfile-8.3.0 2>&1 &

This starts the new server in parallel to the old running at the same time by using port 5543 for restore and testing. We’ll change it back to the default port once we know everything works right. Make sure you ps aux to see both servers working. The first time I did this the random port I chose for the new server was in use by something else and PostgreSQL refused to start.

Loading the data

There are a couple ways to do this. You can reload the dump that you just saved, or you can pipe the output of the pg_dumpall command into the new postmaster process that is running on an alternate port. I usually prefer to go with the pipe.

/usr/local/postgres-8.3.0/bin/pg_dumpall -p 5432 -U postgres | /usr/local/postgres-8.3.0/bin/psql -p 5543 -U postgres

This runs a dump on the old server (port 5432) and immediately restores it into the new server (port 5543) using a pipe to psql. Use the pg_dumpall and psql binaries from the new version you just compiled to gain the bug fixes and performance benefits.

Remember, after you load the old data into the new server, to apply the SQL files for any contrib modules to your new database. You should also check your server log file (logfile-8.3.0) for any notices or errors, using a grep command like grep -in error logfile-8.3.0 . Also vacuum the entire cluster after reloading:
/usr/local/postgres-8.3.0/bin/vacuumdb -p 5543 -a -z

Switch servers

You could run the new server on the alternate port for a period of time with some of your test or staging applications, to make sure everything is working properly and that any feature changes affecting your software have time to be addressed. If you do this, then just reinit the cluster and do the parallel dump again when you are ready. If you don’t, you’ll lose any data inserted in that time, which could be days.

Otherwise, making the switch is simple. Shut down both servers, and then restart the new server on the same port that the old one ran on. If you started the original PostgreSQL with an init script, then shut it down using that init script, edit the script to refer to the new version, and use that init script to start the new server with /etc/init.d/postgres start.

Lastly, remember to change your environment to point to the new postgres version. This isn’t strictly required but if you use psql on the command line often you should set your PATH and MANPATH to refer to the correct location. If your postgres is installed in /usr/local/postgres-8.3.0 for example, you could use a symbolic link called /usr/local/pgsql that points to the currently installed PostgreSQL version. That way you won’t have to change profile, or setup scripts for web based interfaces, etc.
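Here is a sketch of that symlink arrangement, using a scratch prefix under /tmp so it is safe to run (swap in /usr/local for real use):

```shell
# stand-ins for the real install prefixes
mkdir -p /tmp/pgdemo/postgres-8.3.0/bin

# point a stable name at the current version; -n replaces the link
# itself rather than creating a new link inside the old target
ln -sfn /tmp/pgdemo/postgres-8.3.0 /tmp/pgdemo/pgsql

# PATH and MANPATH then reference the stable name, so future upgrades
# only require re-pointing the symlink
export PATH=/tmp/pgdemo/pgsql/bin:$PATH
readlink /tmp/pgdemo/pgsql
```
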

Upgrading PostgreSQL to a major version takes some time, but it isn’t difficult. If you employ this same procedure for your minor version upgrades, you’ll find it becomes second nature. Practice makes perfect.