Tag Archives: postgres

Postfix, Dovecot and PostgreSQL Setup

While trying to set up a simple mail server on Ubuntu for a couple of small virtual domains with limited users, I came across a couple of excellent articles, but they were written for use with MySQL.  I have a machine with limited resources running custom software on PostgreSQL.  Although flat files would have been acceptable, I didn’t have the same luck in breadth of documentation.  So I decided to work from the excellent articles I found and adapt them to Postgres for my server.

The best guide I found was from Linode, and it is geared to help customers set up Email with Postfix, Dovecot, and MySQL on their excellent platform.  The second guide I found left a lot to be desired, but since it was Postgres specific I was able to take those pieces and adapt them to the Linode guide.  That guide was called How to setup a postfix email server with Dovecot, published as a community tutorial on DigitalOcean.  While DigitalOcean and Linode are close competitors, Linode wins out for having the better mail server how-to.

I’m mainly going to show what changes need to be made to the Linode guide to make it work with Postgres; otherwise, use that guide for the complete instructions. I’ll assume you already have a Postgres server up and running on the machine.

Postgres – Creating the Database

Your first change will be under the “Creating the Database” heading.  Let’s start by using the DigitalOcean guide to create the user, while keeping the Linode naming convention as much as possible.

First, follow the adjustments to the /etc/postgresql/<version>/main/pg_ident.conf file (substitute your installed Postgres version in the path).

mailmap         dovecot                 mailuser
mailmap         postfix                 mailuser
mailmap         root                    mailuser
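
These mappings only take effect if pg_hba.conf references the map. The DigitalOcean guide pairs them with an entry in /etc/postgresql/<version>/main/pg_hba.conf; the exact line depends on your setup, but for local socket connections with peer authentication (an assumption based on a stock Ubuntu install) it looks roughly like this:

local   mailserver   mailuser   peer map=mailmap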

Reload Postgres and set up the user.

# sudo -u postgres psql
postgres=# CREATE USER mailuser;
postgres=# REVOKE CREATE ON SCHEMA public FROM PUBLIC;
postgres=# REVOKE USAGE ON SCHEMA public FROM PUBLIC;
postgres=# GRANT CREATE ON SCHEMA public TO postgres;
postgres=# GRANT USAGE ON SCHEMA public TO postgres;
postgres=# CREATE DATABASE mailserver WITH OWNER mailuser;
postgres=# \q 
# sudo psql -U mailuser -d mailserver
CREATE SEQUENCE seq_mail_domain_id START 1;
CREATE SEQUENCE seq_mail_user_id START 1;
CREATE SEQUENCE seq_mail_alias_id START 1;

CREATE TABLE virtual_domains (
  domain_id INT2 NOT NULL DEFAULT nextval('seq_mail_domain_id'),
  domain_name varchar(50) NOT NULL,
  PRIMARY KEY (domain_id)
);


CREATE TABLE virtual_users (
  user_id INT2 NOT NULL DEFAULT nextval('seq_mail_user_id'),
  domain_id INT2 NOT NULL,
  password varchar(106) NOT NULL,
  email varchar(100) NOT NULL,
  PRIMARY KEY (user_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

CREATE TABLE virtual_aliases (
  alias_id INT2 NOT NULL DEFAULT nextval('seq_mail_alias_id'),
  domain_id INT2 NOT NULL,
  source varchar(100) NOT NULL,
  destination varchar(100) NOT NULL,
  PRIMARY KEY (alias_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

To add your domains to the database:

INSERT INTO virtual_domains (domain_name)
VALUES
  ('example.com'),
  ('hostname.example.com'),
  ('hostname'),
  ('localhost.example.com');

Do a simple select from the virtual_domains table and note the id of the domain you wish to add addresses for:

SELECT * FROM virtual_domains;
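
With the four domains inserted above, the output should look something like this (the exact ids depend on insert order):

 domain_id |      domain_name
-----------+------------------------
         1 | example.com
         2 | hostname.example.com
         3 | hostname
         4 | localhost.example.com
(4 rows)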

According to the Postgres document we referenced earlier, you generate the password hash from the command line like so:

# doveadm pw -s sha512 -r 100
Enter new password: ...
Retype new password: ...
{SHA512}.............................................................==

Now you need to add some email addresses like so:

INSERT INTO virtual_users
  ( domain_id, password , email)
VALUES
  ( '1', '{SHA512}.............................................................==', 'postmaster@example.com'),
  ( '1', '{SHA512}.............................................................==', 'supah_user@example.com');

Now, if you created your user according to the DigitalOcean document, it has no password.  You’ll have to add one like this:

ALTER ROLE mailuser WITH PASSWORD 'supersecretPaSSvvord';
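
The Linode guide also adds aliases at this point; the Postgres version is the same idea. A sketch using the schema above, assuming you want to forward root’s mail on domain 1 to the postmaster address:

INSERT INTO virtual_aliases
  (domain_id, source, destination)
VALUES
  ('1', 'root@example.com', 'postmaster@example.com');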

That should be it for your direct Postgres editing. You can follow all the testing they show for MySQL since it looks like standard SQL.

Postfix

You can follow the Linode guide for Postfix, only needing to make changes in step 8.  In main.cf, change the mysql: prefix to pgsql: where they name their lookup files, and rename the files accordingly.  Everything else there is the same.

  #Virtual domains, users, and aliases
  virtual_mailbox_domains = pgsql:/etc/postfix/pgsql-virtual-mailbox-domains.cf
  virtual_mailbox_maps = pgsql:/etc/postfix/pgsql-virtual-mailbox-maps.cf
  virtual_alias_maps = pgsql:/etc/postfix/pgsql-virtual-alias-maps.cf

Watch for these filenames further down, as you’ll have to remember the change.  The actual content of the files, including the queries themselves from step 11 onward, is the same as for MySQL.
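
As a concrete example, /etc/postfix/pgsql-virtual-mailbox-domains.cf comes out looking something like the sketch below. The user, password and hosts values are whatever you chose above, and the query is written against the column names created earlier (domain_name), so double-check it against your own tables:

user = mailuser
password = supersecretPaSSvvord
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE domain_name='%s'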

The Linode document has a great one-liner to test your DB connection in step 15 and on.  Issue this from the shell:

postmap -q example.com pgsql:/etc/postfix/pgsql-virtual-mailbox-domains.cf
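
If the domain exists in your virtual_domains table, this should print 1 (assuming a SELECT 1 query like the sketch above); no output means the lookup found nothing or the connection settings are wrong.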

Dovecot

When working with the Dovecot configuration, keep in mind that if you don’t intend to use POP3, don’t enable it.

In step 25 specify pgsql instead of mysql in the driver line.

In step 27, note that in the dovecot-sql.conf.ext file your default_pass_scheme, using the method above from the DigitalOcean document, is SHA512, not SHA512-CRYPT as in the Linode document.
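
Putting both changes together, the relevant lines of /etc/dovecot/dovecot-sql.conf.ext end up roughly like the sketch below; the connect string values match what was set up earlier and the password_query is written against the virtual_users table above, so treat it as a sketch rather than a drop-in file:

driver = pgsql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=supersecretPaSSvvord
default_pass_scheme = SHA512
password_query = SELECT email AS user, password FROM virtual_users WHERE email='%u';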

Done!

That’s it! Everything else should be the same.  Follow the Linode guide to test your connections, make sure your firewall is open, and if you have any problems they have an excellent troubleshooting guide linked from the original article.  Enjoy!

Updating Drupal from 5 to 6 causes “The post could not be saved” – Postgres version fix

I recently upgraded several Drupal sites to 6.2; some were previously on 5.1, 5.2, and 6. Upon completion of what I thought was a successful upgrade, they all suffered from the same common problem: trying to save a new post resulted in a “The post could not be saved” error. There were also various other database access errors in the logs. On one site, after trying to make a post, Drupal would take it upon itself to take the site offline with the famous “Welcome to your new Drupal website” message. The problem there turned out to be a permission access problem, but it only occurred after a failed post attempt. To repair it, go to admin/content/node-settings and select the “rebuild permissions” button. Once the permissions were repaired it was fine – until you made another post, and then Drupal repeated the offline cycle and the page-access SQL errors started showing up in the log again.

Fortunately there were some hints about the problem out there, but they were mostly for MySQL, and I was using Postgres. Here’s my solution.

First, the most verbose error message I got was on this very site, where it looked like this after trying to post:

* warning: pg_query(): Query failed: ERROR: duplicate key value violates unique constraint “node_revisions_pkey” in /path/to/drupal-6.2/includes/database.pgsql.inc on line 139.
* user warning: query: INSERT INTO node_revisions (nid, uid, title, body, teaser, log, timestamp, format) VALUES (0, 1, ‘test’, ‘test test test test test test test test test test test vv’, ‘test test test test test test test test test test test test test vv’, ”, 1294612270, 4) in /path/to/drupal-6.2/includes/common.inc on line 3538.
* warning: pg_query(): Query failed: ERROR: duplicate key value violates unique constraint “node_vid_idx” in path/to/drupal-6.2/includes/database.pgsql.inc on line 139.
* user warning: query: INSERT INTO node (vid, type, language, title, uid, status, created, changed, comment, promote, moderate, sticky, tnid, translate) VALUES (0, ‘blog’, ”, ‘test’, 1, 1, 1294612270, 1294612270, 2, 1, 0, 0, 0, 0) in /path/to/drupal-6.2/includes/common.inc on line 3538.
* warning: pg_query(): Query failed: ERROR: duplicate key value violates unique constraint “node_comment_statistics_pkey” in /path/to/drupal-6.2/includes/database.pgsql.inc on line 139.
* user warning: query: INSERT INTO node_comment_statistics (nid, last_comment_timestamp, last_comment_name, last_comment_uid, comment_count) VALUES (0, 1294612270, NULL, 1, 0) in /path/to/drupal-6.2/modules/comment/comment.module on line 607.
* The post could not be saved.

If you are using postgres on the command line via the psql client, you can examine all sequences using the command

\ds

You’ll see that both node_nid_seq and node_revisions_vid_seq have been created. If you’ve tried creating a number of new pages or posts and failed, you’ll notice something when you use nextval(), like this:

SELECT nextval('node_nid_seq');

The value will be quite a bit ahead of your actual last successful post’s nid, which you can see like this:

SELECT max(nid) FROM node;

However, if you do this:

SELECT nextval('node_revisions_vid_seq');

Make a note of it. Then find your actual last node_revisions max vid like this:

SELECT max(vid) FROM node_revisions;

And you’ll see it is probably only one higher.

At this point, examine the node table like this:

\d node

                                  Table "public.node"
  Column   |          Type          |                     Modifiers                      
-----------+------------------------+----------------------------------------------------
 nid       | integer                | not null default nextval('node_nid_seq'::regclass)
 vid       | integer                | not null default 0
 type      | character varying(32)  | not null default ''::character varying
 uid       | integer                | not null default 0
 status    | integer                | not null default 1
 created   | integer                | not null default 0
 changed   | integer                | not null default 0
 comment   | integer                | not null default 0
 promote   | integer                | not null default 0
 moderate  | integer                | not null default 0
 sticky    | integer                | not null default 0
 mos_id    | integer                | 
 language  | character varying(12)  | not null default ''::character varying
 tnid      | integer                | not null default 0
 translate | integer                | not null default 0
 title     | character varying(255) | not null default ''::character varying

And you can correctly see under “Modifiers” that the default value for nid is the next value in the node_nid_seq, which is the equivalent of the more familiar MySQL AUTO_INCREMENT attribute.

Now try the same on node_revisions:

\d node_revisions


                       Table "public.node_revisions"
  Column   |          Type          |               Modifiers                
-----------+------------------------+----------------------------------------
 vid       | integer                | not null default 0
 uid       | integer                | not null default 0
 body      | text                   | not null
 teaser    | text                   | not null
 log       | text                   | not null
 timestamp | integer                | not null default 0
 format    | integer                | not null default 0
 nid       | integer                | not null default 0
 title     | character varying(255) | not null default ''::character varying

And you’ll see that the default value for vid is zero (0). It isn’t auto-incrementing, so you get errors trying to write, because you already have a row with vid 0:

SELECT min(vid) FROM node_revisions;
 min 
-----
   0
(1 row)

So what is the solution?

ALTER TABLE node_revisions ALTER COLUMN vid SET DEFAULT nextval('node_revisions_vid_seq'::regclass);

Double check to make sure it went through:

\d node_revisions
                                   Table "public.node_revisions"
  Column   |          Type          |                          Modifiers                           
-----------+------------------------+--------------------------------------------------------------
 vid       | integer                | not null default nextval('node_revisions_vid_seq'::regclass)
 uid       | integer                | not null default 0
 body      | text                   | not null
 teaser    | text                   | not null
 log       | text                   | not null
 timestamp | integer                | not null default 0
 format    | integer                | not null default 0
 nid       | integer                | not null default 0
 title     | character varying(255) | not null default ''::character varying

…And it looks good!
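
One related step that wasn’t needed here, but may be if the sequence value itself has fallen out of step with the data: you can resync a sequence to the table’s current maximum. A hedged extra, not part of the original fix:

SELECT setval('node_revisions_vid_seq', (SELECT max(vid) FROM node_revisions));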

We almost switched to WordPress for one site because of this error, but thinking about converting 5 years and 1250 posts from PostgreSQL Drupal to MySQL WordPress didn’t sound like fun. Fortunately the fix was easy, but nailing down the problem took some time first. Hopefully this helped someone else.

Upgrading PostgreSQL to a new major version

Upgrading PostgreSQL to a new major version can be a challenge, but it isn’t difficult. You need to plan ahead and do some testing afterward, but PostgreSQL includes everything you need to make the change almost seamlessly. Below is a step-by-step rundown of how to make the upgrade and gain all the benefits that come with a new Postgres version.

Upgrading PostgreSQL

Upgrading PostgreSQL need not be hard. The developers have provided all the facilities you need to upgrade nearly painlessly and, depending on the extent of your use, nearly seamlessly.

These notes were made when I was upgrading from PostgreSQL 8.2.6 to 8.3.0, but the information should apply equally to other versions. Keep in mind the version numbers are specific to these releases; substitute your own accordingly.

According to the PostgreSQL INSTALL doc, here’s the order for a fresh install (not an upgrade); the upgrade procedure is almost identical, with the addition of backing up and restoring your old data.

./configure
gmake
su
gmake install
adduser postgres        # for a new install only
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test

configure

First, run configure. I changed the install path to reflect the version so that I can run the two servers in parallel.
./configure --prefix=/usr/local/postgres-8.3.0

gmake

Watch for errors. Run gmake rather than make where the two differ: PostgreSQL needs GNU make, and on many systems gmake is just a link to it (on most Linux boxes plain make already is GNU make). Pretty straightforward. The mailing list is there to help if you have problems with compilation, which is rare. Googling for particular errors is also helpful, and will reveal a fix if you’ve encountered a bug or a platform-specific problem.

gmake install OR gmake install-strip

If you are not using the --enable-debug option then consider using gmake install-strip when installing. This should reduce the file size of the binary files and libraries as well as save some memory while loading. I use this on any non-development machine.

Make the /data directory

Straightforward here. Create the directory and initialize the new database cluster.

mkdir /usr/local/postgres-8.3.0/data
chown postgres /usr/local/postgres-8.3.0/data
su - postgres
/usr/local/postgres-8.3.0/bin/initdb -D /usr/local/postgres-8.3.0/data

While running initdb you’ll see:

The default text search configuration will be set to "english".
creating dictionaries ... ok

This is evidence of the brand new tsearch (full-text search), which is now integrated into core. Many people had upgrade problems with tsearch in the past, and for me tsearch is an important tool, especially for people migrating from MSSQL, so it is a great step forward to see it included in core now. See below if you were using the old tsearch contrib module.

Compile contrib modules

After you perform the install, pause here. Take a look at the contrib directory in the source. Are you using any contributed modules? If so, this is a good place to compile and install them, before you go restoring your old database. If you don’t do this, and the contrib modules had you insert some SQL into your database, then when you restore your dump the SQL objects will be present, but the module won’t, and you will get errors. Read the README file in the top-level contrib directory for the procedure; it is usually gmake all; gmake install. Don’t run the regression tests now, because you haven’t loaded the SQL code that goes with the modules yet. Be aware that the regression tests try to connect to the default port (that means the original server) and will probably fail once you run the new server in parallel on a different port later on. From the manual for contrib modules:

After a major-version upgrade of PostgreSQL, run the installation script again, even though the module’s objects might have been brought forward from the old installation by dump and restore. This ensures that any new functions will be available and any needed corrections will be applied.
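
As a concrete example of the build step, compiling a single contrib module from the source tree looks roughly like this (pgcrypto is only an illustration; substitute whichever modules you actually use):

cd contrib/pgcrypto
gmake
gmake install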

Also, if you are moving to 8.3.0 or later and were using an older version of tsearch, you’ll need to read the manual page “Migration from Pre-8.3 Text Search”. Loading a pre-8.3 database that contained contrib/tsearch into an 8.3 server will cause problems; it caused my restore to fail. The manual suggests loading contrib/tsearch before the restore because it will prevent loading the old objects from the old database.

Edit postgresql.conf and other configuration files

Do a diff of your postgresql.conf and pg_hba.conf files against the new versions. If moving between major versions I would edit them by hand. When moving between minor versions last time, I just copied the old files over the new as suggested, and found errors in the log at startup because a configuration option had been removed and I was still setting it.
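
The comparison itself is simple. Assuming the old data directory is /usr/local/pgsql/data (adjust the paths to your own layout), something like:

diff -u /usr/local/pgsql/data/postgresql.conf /usr/local/postgres-8.3.0/data/postgresql.conf
diff -u /usr/local/pgsql/data/pg_hba.conf /usr/local/postgres-8.3.0/data/pg_hba.conf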

Backup your old database

Seems like an odd place to be performing the backup, doesn’t it? Yes, but the PostgreSQL manual has a good reason for it:

To make the backup, you can use the pg_dumpall command from the version you are currently running. For best results, however, try to use the pg_dumpall command from PostgreSQL 8.3.0, since this version contains bug fixes and improvements over older versions. While this advice might seem idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to install the new version in parallel with the old version. In that case you can complete the installation normally and transfer the data later. This will also decrease the downtime.

The reason here is that the pg_dump or pg_dumpall binary from the new release has bug fixes over the old version. Now it makes sense. With that in mind I backed up to a file first, just as a precaution.

/usr/local/postgres-8.3.0/bin/pg_dumpall -p 5432 -U postgres | bzip2 -zv --best > pre_8.3_dump.sql.bz2

This command uses the NEW version of the pg_dumpall binary (8.3.0) and tells it to run against the currently running PostgreSQL backend (8.2.6), which is on port 5432, storing the output in a bzipped file in the current directory.

Start new PostgreSQL server on a different port


su - postgres
/usr/local/postgres-8.3.0/bin/postgres -p 5543 -D /usr/local/postgres-8.3.0/data >logfile-8.3.0 2>&1 &

This starts the new server in parallel with the old one by using port 5543 for the restore and testing.  We’ll change it back to the default port once we know everything works right.  Run ps aux and make sure you see both servers working.  The first time I did this, the random port I chose for the new server was in use by something else and PostgreSQL refused to start.

Loading the data

There are a couple of ways to do this. You can reload the dump that you just saved, or you can pipe the output of the pg_dumpall command into the new postmaster process that is running on the alternate port. I usually prefer the pipe.

/usr/local/postgres-8.3.0/bin/pg_dumpall -p 5432 -U postgres | /usr/local/postgres-8.3.0/bin/psql -p 5543 -U postgres

This runs a dump on the old server (port 5432) and immediately restores it into the new server (port 5543) using a pipe to psql. Use the pg_dumpall and psql binaries from the new version you just compiled to gain the bug fixes and performance benefits.

Remember, after you load the old data into the new server, to apply the SQL files for any contrib modules to your new database (an example follows below). You should also check your server log file (logfile-8.3.0) for any notices or errors using a grep command like cat -n logfile-8.3.0 | grep -i error . Also vacuum the entire cluster after reloading:
/usr/local/postgres-8.3.0/bin/vacuumdb -p 5543 -a -z
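
For the contrib SQL step mentioned above, applying a module’s script to a restored database looks roughly like this; the module and database names are only illustrations, and the share/contrib location is an assumption based on a source install under this prefix:

/usr/local/postgres-8.3.0/bin/psql -p 5543 -U postgres -d mydatabase -f /usr/local/postgres-8.3.0/share/contrib/pgcrypto.sql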

Switch servers

You could run the new server on the alternate port for a period of time with some of your test or staging applications, to make sure everything is working properly and that any feature changes affecting your software have time to be dealt with. If you do this, just re-init the cluster and do the parallel dump again when you are ready to switch; if you don’t, you’ll lose any data inserted into the old server in the meantime – which could be days’ worth.

Otherwise, making the switch is simple. Shut down both servers, and then restart the new server on the port the old one ran on. If you started the original PostgreSQL with an init script, then shut it down using that init script, edit the script to refer to the new version, and use it to start the new server with /etc/init.d/postgres start.
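
Without an init script, the switch is just stop, stop, start. A sketch assuming the old 8.2.6 install lives in /usr/local/pgsql and the new server took its alternate port from the command line (so a plain restart comes up on the default port):

su - postgres
/usr/local/pgsql/bin/pg_ctl stop -D /usr/local/pgsql/data -m fast
/usr/local/postgres-8.3.0/bin/pg_ctl stop -D /usr/local/postgres-8.3.0/data -m fast
/usr/local/postgres-8.3.0/bin/pg_ctl start -D /usr/local/postgres-8.3.0/data -l logfile-8.3.0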

Lastly, remember to change your environment to point to the new postgres version. This isn’t strictly required but if you use psql on the command line often you should set your PATH and MANPATH to refer to the correct location. If your postgres is installed in /usr/local/postgres-8.3.0 for example, you could use a symbolic link called /usr/local/pgsql that points to the currently installed PostgreSQL version. That way you won’t have to change profile, or setup scripts for web based interfaces, etc.
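
A sketch of that symlink approach (the man directory layout may differ on your install):

ln -s /usr/local/postgres-8.3.0 /usr/local/pgsql
export PATH=/usr/local/pgsql/bin:$PATH
export MANPATH=/usr/local/pgsql/man:$MANPATH

On the next upgrade you only repoint the symlink (e.g. with ln -sfn), and everything that refers to /usr/local/pgsql keeps working.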

Upgrading PostgreSQL to a new major version takes some time, but it isn’t difficult. If you employ this same procedure for your minor version upgrades, you’ll find it becomes second nature. Practice makes perfect.

Monitoring the Postgres Daemon

Following up on Running Postgres with Daemontools – Shutdown Errors, the Postgres mail list had some advice on monitoring and automatically restarting Postgres.

It seems automatically restarting the Postgres postmaster process is a dangerous proposition. As mentioned on the mailing list, if a process quits for whatever reason, daemontools will simply restart it. That could theoretically fail and restart 3600 times per hour, every hour, every day, without you ever being notified that there is a problem requiring attention. This may be fine for some things, but not Postgres.

Some people indicated they use a custom script that tries to connect to Postgres and notifies them if the connection fails. Some said the program monit will do the same job. I believe monit is quite extensible and will allow you to be notified as well as restart the server.
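
As a rough idea of what that looks like in monit (I haven’t run this config yet; the pidfile path, init script, port and address are assumptions, not a tested recipe):

check process postgres with pidfile /usr/local/pgsql/data/postmaster.pid
  start program = "/etc/init.d/postgres start"
  stop program  = "/etc/init.d/postgres stop"
  if failed unixsocket /tmp/.s.PGSQL.5432 protocol pgsql then restart
  alert admin@example.com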

Still another person said they run the nagios monitoring program and are notified by email and pager of a failure.

Whatever the solution is, it seems only right to me that you should be notified of a failure if you so desire. At the same time, the mechanism that is handling the process should provide a clean shutdown. Apparently monit does that.

While I can’t advocate daemontools for Postgres, I would certainly say it is time to revisit Nagios and monit. I’ve used Nagios in the past but had resource issues with it. I’ve perused the monit documentation and enjoyed the flexibility. Now I’ll be looking at them both again and reporting back.

Running Postgres with Daemontools – Shutdown Errors

Running anything under daemontools seems like a great idea, but I’ve had a particularly bad time trying to shut down Postgres under daemontools while running Apache and PHP with persistent Postgres connections. I had a VPS that was running Postgres with daemontools, and I just let it lapse because I wasn’t getting enough use out of it. Instead I focused on my main machine. When I went to upgrade Postgres I noticed that I didn’t have it running under daemontools.

I set to work getting it running. I even prepared a tutorial for you on running Postgres under daemontools. I got everything working relatively easily using some of the how-tos found on the net. Everything was working well, and just as I was about finished I decided that bringing it up was good, but that I should verify shutdown too.

I issued the required svc -d /service/postgres and everything went wacky. Every 3 minutes the log kept saying “FATAL: the database system is shutting down”. After examining the ps output I found a defunct postmaster and a bunch of active connections.

After several hours of reading and trying I’m pretty sure I came up with the answer. From the postgres manual regarding pg_ctl:

In stop mode, the server that is running in the specified data directory is shut down. Three different shutdown methods can be selected with the -m option: “Smart” mode waits for all the clients to disconnect. This is the default. “Fast” mode does not wait for clients to disconnect. All active transactions are rolled back and clients are forcibly disconnected, then the server is shut down. “Immediate” mode will abort all server processes without a clean shutdown. This will lead to a recovery run on restart.

svc -d sends a TERM and a CONT signal; it certainly isn’t adding any -m option. But I did notice that the init script that ships with Postgres (which does shut down cleanly) performs the following on a ‘stop’ request:


stop)
echo -n "Stopping PostgreSQL: "
su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast"
echo "ok"
;;

In other words, the -m fast switch means it doesn’t wait for clients to disconnect. This is an important point if you run PHP with persistent connections, and it explains why those process entries were still there after svc -d despite the defunct postmaster. The svc -d /service/postgres was probably waiting on those persistent connections to disconnect. That’s my guess anyway; svstat kept saying it was waiting to shut down.
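
One workaround I haven’t tried (my own assumption, not something from the init script): daemontools can send other signals, and the postmaster treats SIGINT as a fast shutdown, the equivalent of pg_ctl stop -m fast. Assuming the run script execs the postmaster directly so the signal actually reaches it, something like this might behave properly:

svc -di /service/postgres    # 'd' marks the service down, 'i' sends SIGINT (fast shutdown)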

Thus ends my attempt to run Postgres under daemontools for now. Starting it was successful, reloading configuration files was successful, but that’s where it ends. As a guy who has had both data loss and corrupt filesystems from non-clean Postgres shutdowns, I’m pretty concerned about what will happen when the computer reboots if it can’t disconnect those connections.

Or, for that matter, what happens if Postgres shuts down, supervise restarts it, but those persistent connections remain open? I’m betting Apache will report that it is unable to connect to the database server. I’ve noticed in the past that changing a user’s search path in Postgres on the command line (through psql) and then reloading a web page has absolutely no effect, because the web server is holding open a persistent connection to which those changes don’t apply.

This will take some more examination, but for now it may be better to look at another tool such as monit, or consider getting rid of persistent connections.

Apache, Postgres and Drupal on a VPS

I really would prefer to have my own server, but sticking a box in colo is expensive, and where I live, getting access to the physical colo space would be nearly impossible anyway. As a result I run on a VPS. Unfortunately a VPS has some horrible limitations, depending on who else is on the box with you.

Recently I decided to move my personal blog off of b2evolution and onto Drupal. Too bad Drupal is such a resource hog. Most CMS and blog software is, though, and it is really hard to find minimal, optimized blog software that uses any form of caching. Today it hit the skids and my patience hit the wall. Argh!

I was converting my personal blog by hand because I only have about 30 entries, so it didn’t pay to write a conversion script. Every time I hit the ‘post’ button in Drupal I wound up with a blank screen, a ‘could not connect’ error or, worse, any command in an SSH terminal window returned “could not allocate memory”. I had to do some fast tuning of something, because I was rebooting the server after every blog post!

I chose to tackle Apache first, because there is an Apache Performance Tuning Guide that helps a bunch. As for Postgres, I’m running an old version that I really need to upgrade before I tackle its configuration and optimization, and that’s not a 30-minute job.

VPS, Apache and Low Memory

A VPS has low memory for sure. Even though you can sometimes burst higher, in general it is pretty low. The very first thing the Apache performance tuning guide tackles is memory.


You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. This procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes.

Using top and sorting the process list, I am able to see that the average Apache process is using WAY TOO MUCH memory at 25M apiece – and there are 10 processes running. I don’t have time to tune the process size now, so I’ll tune the number of servers using some very simple configuration parameters. Since we are using the prefork MPM, the directives can be found in the extra/httpd-mpm.conf file under mpm_prefork_module.
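
If you want a number rather than eyeballing top, something like this gives a rough average resident size per process; the process name apache2 is an assumption (on some builds it is httpd):

ps -o rss= -C apache2 | awk '{ total += $1; n++ } END { if (n) print total/n " KB average over " n " processes" }'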

Since I am supposed to be guaranteed 256M of memory, burstable to 1G, I’ll optimize for the lower number. 256M / 25M is 10, and that’s not even leaving room for other processes. The current MaxClients setting is 150!

From:


StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0

To:


StartServers 2
MinSpareServers 2
MaxSpareServers 5
MaxClients 10
MaxRequestsPerChild 0

I also changed it to start only 2 servers, keep a minimum of 2 spare servers instead of 5 (and a maximum of 5 instead of 10), and allow only 10 clients instead of 150. This will essentially create a queue if more clients are waiting, but it shouldn’t dip into swap, and it will save a bunch of memory. This will of course be monitored. Once time permits and I am able to shrink each Apache2 process (the other factor), I will revisit this and likely increase MaxClients accordingly.

Upgrading Postgres in parallel with your old version

Today I got caught up in PostgreSQL. A feature didn’t work as planned and when the mailing list responded to my plea for help it turned out the feature didn’t work fully in my version of Postgres, v.8.1.4.

Sad to say, I haven’t compiled a pgsql version in 13 months. I got so lazy that the last time I installed on my dev machine I just used the package that came with Kubuntu. Now I’m in a bind: I have a production machine and a dev machine to back up, upgrade and restore.

I learned one important lesson from this: name your Postgres installation directory after the version you are installing. That kind of goes against my best judgement, but it means you can run two Postgres instances beside each other. Why would you want to do that? Take this line from the upgrade instructions:

To back up your database installation, type:

pg_dumpall > outputfile

To make the backup, you can use the pg_dumpall command from the version you are currently running. For best results … try to use the pg_dumpall command from [your new version of] PostgreSQL … since this version contains bug fixes and improvements over older versions. While this advice might seem idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to install the new version in parallel with the old version. In that case you can complete the installation normally and transfer the data later. This will also decrease the downtime.

So: back up the existing database using the new version, which hasn’t been installed yet and, without a parallel install, can’t be installed until the old one is backed up. That is, use the pg_dumpall binary from your newly compiled version and run it against the old backend.

Seems like a problem, but if you run the new version in parallel you have a couple of benefits as I see it.

1. You don’t have to take your database down during upgrade! Once you are sure things are working on your new version then you simply flick the switch and it is nearly transparent. Don’t be like the company down the block who took their site down for 2 days to perform a db upgrade 🙂

2. If things go horribly wrong with your upgrade, you still have the existing data, database and files to work from while you track down the problem.

Keep these things in mind:

1. Change the install path by specifying it with the --prefix switch during ./configure. Normally it installs to /usr/local/pgsql, but because I already have a server installed and running there, I chose to specify --prefix=/usr/local/pgsql-8.2.3

2. You can literally run it in parallel, just changing the port on the new server. In this fashion you can pipe the output of pg_dumpall straight into the new server, eliminating the plain-text SQL file usually produced (see the sketch below).
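
A sketch of that pipe, assuming the new 8.2.3 build lives in /usr/local/pgsql-8.2.3 and is listening on port 5433 while the old server is still on 5432:

/usr/local/pgsql-8.2.3/bin/pg_dumpall -p 5432 -U postgres | /usr/local/pgsql-8.2.3/bin/psql -p 5433 -U postgres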

Make sure you block write access to the old version while performing the dump. When you are sure that everything dumped correctly, you can shut down the old server and start the new one on the old port.

There you have it: a nearly seamless database upgrade, and a couple of good reasons for running your new Postgres upgrade in parallel with your current version.