All posts by Talliashimlistner

How To Disable WordPress Plugins Manually

If a misbehaving plugin locks you out of the WordPress login, you have a couple of options.

You can log in to your WordPress database using the command-line MySQL client or phpMyAdmin, and from there look for the wp_options table. If you are using a security plugin that changes table prefixes, you'll have to replace wp_ with your prefix. Once there, look for a value in the option_name column called active_plugins. You can find it like this:

SELECT * FROM wp_options WHERE option_name like '%plugin%';

This will give you a list of plugin-related options to examine. If you are using WPMU or WP3 Multisite and the plugin is enabled globally, you'll need to use the table name with the ID of the site in the prefix, like wp_2_options instead of wp_options. The option_name "active_plugins" used to contain all active plugins, but it appears from examining this install that "transient_plugin_slugs" holds them now. If you choose to modify a value, copy and paste the original somewhere safe for easy restoration once things are fixed.
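If you can't get into the dashboard at all, a commonly used trick is to reset active_plugins to an empty serialized PHP array, which deactivates every plugin at once. This is a sketch only: back up the original value first, and adjust the wp_ prefix to match your install.

```sql
UPDATE wp_options
SET option_value = 'a:0:{}'   -- serialized empty PHP array: no active plugins
WHERE option_name = 'active_plugins';
```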

The option I chose for this Multisite plugin that prevented login was simpler: rename its folder in the plugins directory. That causes WordPress not to load the bad files, because they are no longer in the expected location.
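The rename trick can be sketched like this. The WordPress root here is a scratch directory and the plugin slug is made up, so substitute your real paths; the ".disabled" suffix is just a convention.

```shell
# Self-contained demo: a scratch directory stands in for the real WordPress root.
WP_ROOT=$(mktemp -d)
mkdir -p "$WP_ROOT/wp-content/plugins/broken-plugin"
# Rename the folder so WordPress can no longer find the plugin's files:
mv "$WP_ROOT/wp-content/plugins/broken-plugin" \
   "$WP_ROOT/wp-content/plugins/broken-plugin.disabled"
ls "$WP_ROOT/wp-content/plugins"   # prints: broken-plugin.disabled
```

Renaming the folder back re-enables nothing by itself; WordPress will have marked the plugin inactive, so you can reactivate it deliberately from the dashboard.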

Once back in the admin dashboard, you can disable the plugin completely and take whatever actions you need to repair, replace, or trash it for good.

MySQL Client Switch for Vertical Output

Did you ever need to examine one line from a MySQL table and were unable to line up the column headers with the data because the line was too long?

Well, no fears: it turns out the MySQL client has a statement terminator (\G) you can use to arrange output vertically for easier reading. This is very handy when you are looking at only one row. Here's an example:

mysql> SELECT * FROM emp_vw WHERE id = 10110 LIMIT 1 \G
*************************** 1. row ***************************
id: 10110
company_id: 21100
last_name: Stall
first_name: John
gender: MALE
address: 277 142nd Ave
city: New York
state: NY
phone: (888)555-1212
birthday: 1963-05-12
active: 1
timestamp: 2011-11-30 18:58:50
1 row in set (0.01 sec)

This is a great way to examine the content of the fields of any table for investigation purposes. It works very well for views and tables with many columns that form long lines when displayed horizontally. The equivalent client option can also be set in your my.cnf file, but I prefer to leave it off so vertical output is enabled with \G only when needed.
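For reference, if you did want it permanently, the client's --vertical option can go in my.cnf under the [mysql] group. A sketch only; as noted above I prefer to leave this out:

```ini
# my.cnf sketch: make the mysql client print all results vertically
[mysql]
vertical
```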

Mounting NFS causes error “mount: wrong fs type, bad option, bad superblock on …”

Some days the obvious just passes you by.

On a fresh install of Ubuntu Server I found that trying to mount an NFS export from our NAS wouldn't complete. I had the correct options in /etc/fstab on this machine and in /etc/exports on the NFS server, as evidenced by the success of other machines connected to it. Mounting it on this server gave the following error:

root@iknow:/var/log# mount /mnt/backup
mount: wrong fs type, bad option, bad superblock on ...,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so

After double checking everything several times the light bulb finally came on. nfs-common wasn’t installed!

sudo apt-get install nfs-common

It installed, I retried the mount, and it succeeded. Just remember that as a client, you have to have nfs-common installed if you want to mount an NFS export from the server.
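For reference, a typical NFS client entry in /etc/fstab looks something like this. The server name, export path, and mount options here are placeholders, not the ones from my setup:

```
# <server>:<export>      <mount point>  <type>  <options>     <dump> <pass>
nas01:/export/backup     /mnt/backup    nfs     rw,hard,intr  0      0
```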

MySQL ERROR 1396 (HY000): Operation CREATE USER failed

While creating a new user in MySQL today, I accidentally pressed the return key while in the middle of entering a password in the command-line client, which looked like this:

mysql> CREATE USER 'userguy'@'' IDENTIFIED BY '
'> ;
'> '''';;
'> ';
Query OK, 0 rows affected (0.00 sec)

Which isn't what I wanted. My attempts to terminate the statement failed, and it succeeded anyway, so I had to fix it.

I issued a DROP USER command and was greeted with:

mysql> drop user userguy;
ERROR 1396 (HY000): Operation DROP USER failed for 'userguy'@'%'

That didn’t work either. Instead I did this:

delete from mysql.user where User = 'userguy';
delete from mysql.db where User = 'userguy';

And when I tried to create the user again I was still met with ERROR 1396.

How to solve MySQL ERROR 1396 (HY000): Operation CREATE USER failed

Once those user entries are removed from the mysql.user and mysql.db tables, you just have to flush privileges:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Now when you enter your CREATE USER command (with the correct syntax and a valid single line password), it should succeed.
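Putting it all together, the whole recovery looks like this. A sketch only: the hostname and password below are placeholders, and the DELETEs assume no other grants reference this user.

```sql
DELETE FROM mysql.user WHERE User = 'userguy';
DELETE FROM mysql.db   WHERE User = 'userguy';
FLUSH PRIVILEGES;
CREATE USER 'userguy'@'localhost' IDENTIFIED BY 'a-single-line-password';
```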

And stop working when you are tired!

Creating MySQL Users and Granting Permissions

When should you create a new user in MySQL? That’s a question that you have to ask whenever you add a new application or restructure your data. I’ve been in places where every application connects as root, and obviously that’s very bad.

Only you know your situation. When I build web applications, sometimes I'll have a shared database, such as geographic information or lists of musical artists, that I want multiple applications to have access to. But only read access; they can't change it. So root is out. But they also have their own database (in MySQL) or schema (in PostgreSQL) in which they will need to INSERT, UPDATE, DELETE and SELECT from tables. They may even need to CREATE tables, such as temp tables, depending on the situation. In these cases, such applications require a dedicated user, because there are no commonalities that would allow one user account with its privileges to be reused by another application.

In this guide I'll only deal with command-line actions from the MySQL client, not cPanel actions, but you'll be able to transfer the concept to a different interface.

To create users you’ll most likely have to be root:

mysql -u root -p

Create the new user for this application:

CREATE USER 'username'@'hostname' IDENTIFIED BY 'password';

You'll have to think about this hostname setting. If the only thing accessing your database with this username is your application, and your MySQL process is on the same machine, then 'localhost' will do as a hostname. If you have a separate machine running MySQL, as I do, then you'll need to enter the hostname or IP address of your web server machine, as that is where the application will connect from.

CREATE USER 'username'@'webserver' IDENTIFIED BY 'password';

It is possible to use wildcards for the hostname, although you should think this through, as it is generally a bad idea. If you are on an internal network and you have your web server cluster and SQL machine on the same subnet, you could do this:

CREATE USER 'username'@'10.20.30.%' IDENTIFIED BY 'password';

which will allow any of the 256 addresses on that subnet to connect to your MySQL server as that user. Think about the future use of your network before you do this.


Now you have a user who can't do much. You'll need to give them access to something. Since the user I'm creating will be the one populating the database with information and not just reading it, they need greater privileges than the average web application that just reads and reports. The GRANT syntax can be found in the MySQL manual. My specific requirement allowed me to enter something like this (the exact privilege list will vary with your needs):

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON dbname.* TO 'username'@'10.20.30.40';
  • The GRANT is followed by a list of actions the user can take, as seen in the manual page referenced above (SELECT, DELETE, etc.).
  • ON a certain database and tables – dbname.* means these privileges apply to ALL tables in the database called dbname.
  • TO username @ IP address – meaning if this user connects from another address they won't receive these privileges.

DON'T enter:
mysql> GRANT ALL ON *.* TO 'userguy'@'%';

which is the equivalent of giving them root privileges on all tables of all databases when connecting from anywhere. It's pointless to try to manage users that way. And never grant *.* to a user unless you are aware of everything those privileges include. Most people would be mortified to find they had used *.* and granted their web user the ability to DROP all their databases – and they usually don't find out until after it has happened.

Now, just to be sure everything is in effect, issue the FLUSH PRIVILEGES command.
Keep in mind that in newer versions, MySQL will create a user when a GRANT is issued if the user doesn't already exist. However, I prefer to explicitly CREATE a user, and keep track of it in a file.

Connecting remotely requires configuration changes. If you are not connecting locally, you'll need to modify your MySQL server's my.cnf file to allow remote connections. This is part of your extended configuration and security, and I won't discuss it in depth in this article.
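For reference, the relevant my.cnf change is usually the bind-address setting under the [mysqld] group. A sketch only – the address below is a placeholder for your SQL machine's LAN IP, and you should restrict and firewall it to match your actual network:

```ini
[mysqld]
# Listen on the server's LAN address instead of only localhost
bind-address = 10.20.30.5
```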

Now you can go about creating users and granting privileges with a total security plan in mind. Everyone can access only what they need, no one can access or change what they shouldn’t, and users can only connect from specific locations.

Optimizing MySQL Data Types

Optimizing MySQL datatypes is an important task for anyone, whether you are running a single website or a cluster of geographically distributed MySQL machines. Efficiency in your database-driven application begins with selecting the right datatypes for your columns.

In the process of a recent project I had to get data from a third-party source and store it in a local MySQL database, for which we chose to build a dedicated MySQL server machine with a RAID 10 disk array. I had no idea what the data we would retrieve would look like. All we had was a list of columns and nothing else. To get started we had to guess at the length of the data, assuming the column names were descriptive of the data they held. An initial table contained column names followed by simple VARCHAR(255) declarations. Totally unacceptable, but it was a start.

What you got there?

Once the data came back, I had to examine each column in each table to see what it held. There are many ways of doing this, including using the MAX() and MIN() functions like so:

mysql> SELECT MAX(LENGTH(first_name)) FROM emp;
+-------------------------+
| MAX(LENGTH(first_name)) |
+-------------------------+
|                      14 |
+-------------------------+

This tells me that the maximum length of an employee's first name is only 14 characters. So why a VARCHAR(255) or a CHAR(100) or anything else? Let's look at the longest names in the table:

mysql> SELECT DISTINCT(first_name) FROM emp WHERE LENGTH(first_name) = 14;
+----------------+
| first_name     |
+----------------+
| Thomas Michael |
+----------------+

That query gives us only the distinct first_name values that are 14 characters long. An examination of the real data is important when selecting datatypes, so you may want to run this again using > 10 in place of = 14; you can even include a count of how many employees have long first names, plus any other queries that help you understand the data you are working with.

The Average Joe

Another useful query is to determine the average length of the first name column:

mysql> SELECT AVG(LENGTH(first_name)) FROM emp;
+-------------------------+
| AVG(LENGTH(first_name)) |
+-------------------------+
|                  5.9771 |
+-------------------------+

With the average length of the column being only 6 characters, does it make sense to define a fixed CHAR(14) field to accommodate the longest value when only one employee has a first name that long? Would it be better to use a VARCHAR column anyway? A quick examination of how many names have each character count is required:

SELECT LENGTH(first_name) AS size, COUNT(*)
FROM emp
GROUP BY LENGTH(first_name);

+------+----------+
| size | COUNT(*) |
+------+----------+
|    1 |        0 |
|    2 |       21 |
|    3 |       85 |
|    4 |      626 |
|    5 |     1259 |
|    6 |     1626 |
|    7 |      958 |
|    8 |      345 |
|    9 |      183 |
|   10 |       38 |
|   11 |       37 |
|   12 |        9 |
|   13 |        9 |
|   14 |        1 |
+------+----------+


This tells me that most people in our system have first names between 4 and 7 characters long. Possessing and understanding this information will allow me to make decisions on the best column type. It is also valuable when creating partial indexes on VARCHAR columns.
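As an aside, this is roughly what such a partial (prefix) index would look like. The index name and the prefix length of 8 are my assumptions, chosen to cover the bulk of the 4–7 character names above:

```sql
ALTER TABLE emp ADD INDEX idx_first_name (first_name(8));
```

A prefix index keeps the index small while still being selective for the vast majority of lookups on this column.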

Procedure Analyse

One function available with MySQL is PROCEDURE ANALYSE. The manual for it is here:

Replace 5.0 in the URL with your version.

The description from the manual for ANALYSE is:

ANALYSE() examines the result from a query and returns an analysis of the results that suggests optimal data types for each column that may help reduce table sizes.

That just took the guesswork out of your column declaration optimizations! Knowing your data is still critical because ANALYSE is still capable of guessing wrong, but it is another tool to assist you. To view the output of ANALYSE for the first name column, issue a query as follows:

SELECT first_name
FROM emp
PROCEDURE ANALYSE();

| Field_name | Min_value | Max_value | Min_length | Max_length | Empties_or_zeros | Nulls | Avg_value_or_avg_length | Std | Optimal_fieldtype |
| schema.emp.first_name | A | Zyrida | 1 | 16 | 0 | 0 | 5.9771 | NULL | VARCHAR(16) NOT NULL |
1 row in set (0.01 sec)

As you can see in the output, PROCEDURE ANALYSE() returns a single row with:

  • the column name
  • the MIN and MAX values (not important for strings)
  • the MIN and MAX lengths (queries we ran manually above)
  • how many values are empty or zero, and how many are NULL
  • the AVG value (for numbers) or AVG length (for strings)
  • the population standard deviation (STD)
  • a recommended column definition called Optimal_fieldtype

Now, you have to be careful about accepting the Optimal_fieldtype when passing the default arguments to PROCEDURE ANALYSE(). In a lot of cases, even in the case of a BOOLEAN type, it will return an ENUM, such as ENUM('0', '1'). Close attention to detail is important here. In the following example the column only holds one possible value at this time. It is an unfortunate case, but the data being returned to me uses this as a foreign key and as part of a primary key on many tables.

SELECT ...
FROM company
PROCEDURE ANALYSE();
| Field_name | Min_value | Max_value | Min_length | Max_length | Empties_or_zeros | Nulls | Avg_value_or_avg_length | Std | Optimal_fieldtype |
| | 20000 | 20000 | 5 | 5 | 0 | 0 | 20000.0000 | 0.0000 | ENUM('20000') NOT NULL |
1 row in set (0.00 sec)

In actuality, this ID is a numeric value, and further numeric ID values may be added over time, so a numeric data type would be more appropriate than an ENUM. In order for PROCEDURE ANALYSE to return a correct numeric type we'll have to pass some arguments to it. According to the manual, the arguments are as follows:

max_elements (default 256) is the maximum number of distinct values that ANALYSE() notices per column. This is used by ANALYSE() to check whether the optimal data type should be of type ENUM; if there are more than max_elements distinct values, then ENUM is not a suggested type.

max_memory (default 8192) is the maximum amount of memory that ANALYSE() should allocate per column while trying to find all distinct values.

Retrying that query with arguments (1,24) we get:

SELECT ...
FROM company
PROCEDURE ANALYSE(1, 24);

| Field_name | Min_value | Max_value | Min_length | Max_length | Empties_or_zeros | Nulls | Avg_value_or_avg_length | Std | Optimal_fieldtype |
| | 20000 | 20000 | 5 | 5 | 0 | 0 | 20000.0000 | 0.0000 | SMALLINT(5) UNSIGNED NOT NULL |
1 row in set (0.00 sec)

As you can see, ANALYSE is now suggesting an Optimal_fieldtype of SMALLINT(5) UNSIGNED NOT NULL, which has a numeric range of 0 to 65535 – adequate for our current and future ID allocation in this column.

Planning for the future

One thing to remember is that the data you are seeing today isn't necessarily representative of the data you'll get in the future, unless you have a really large set defined by someone else. You'll have to be careful with the data types you define. Don't make them unnecessarily large, but do plan some wiggle room beyond the PROCEDURE ANALYSE optimal value. Use your mind here.

If you are creating your own data from scratch such as with a form entered on a website, then this technique will be most usefully applied by you after you’ve been up and running for some time and have some real data to analyze. If you are working with a small schema you’ll be able to ALTER your columns to the new data types after the fact.
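For example, converting one of the guessed VARCHAR(255) columns to the analyzed size would look like this. A sketch: the length of 20 comes from the MAX(LENGTH()) work above plus a little wiggle room, and you'd repeat this per column:

```sql
ALTER TABLE emp MODIFY first_name VARCHAR(20) NOT NULL;
```

On a large table, remember that ALTER TABLE rebuilds the table, so schedule it accordingly.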

Changing phpBB User Control Panel Welcome Message (L_UCP_WELCOME)

Trying to integrate a forum into an existing site that already has its own login has been a challenging task. I'm getting a good understanding of the layout of phpBB, though.

One thing I didn’t find much information about was changing the welcome message in the User Control Panel. It reads like this:

Welcome to the User Control Panel. From here you can monitor, view and update your profile, preferences, subscribed forums and topics. You can also send messages to other users (if permitted). Please ensure you read any announcements before continuing.

One problem: I'm integrating with an existing system that has its own account management built in, along with custom private messaging, so the user won't be able to change their email or password, or message others. So the message needs to be changed.

In your theme's template folder, this is found in the file ucp_main_front.html. Near the top it looks like this:

{L_UCP_WELCOME}
But the definition is a long way from there. I find these definitions scattered all over, with many in the functions.php file, but in this case the L_ prefix denotes a language define. That means you'll find the definition in the language/en/ucp.php file – en if you are using the English language, that is.

BUT, if you have some nice editing software that can find definitions throughout the whole project directory, you'll be disappointed when you can't find L_UCP_WELCOME. That's because it is defined as UCP_WELCOME instead. Confusing? Open the file listed above and you'll find it is ONE great big array named $lang. Somewhere down the list you'll find:

	'UCP_WELCOME'					=> 'Welcome to the User Control Panel. From here you can monitor, view and update your profile, preferences, subscribed forums and topics. You can also send messages to other users (if permitted). Please ensure you read any announcements before continuing.',

Just change it to whatever you like.

Personally I don't like having to update the language file manually after an update, but that's the current state of this software. Change UCP_WELCOME to something you like and it will show up in place of the L_UCP_WELCOME placeholder in the template when the page is loaded. The alternative, if you are already editing the style template files for customization, is to just hard-code the message in HTML – as long as you don't plan on using other languages.

Updating Drupal from 5 to 6 causes “The post could not be saved” – Postgres version fix

I recently upgraded several Drupal sites to 6.2. Some were 5.1, 5.2, and 6 previously. Upon completion of what I thought was a successful upgrade, they all suffered from the same problem: trying to save a new post resulted in a "The post could not be saved" error. There were also various other database access errors in the logs. On one site, after trying to make a post, Drupal would take it upon itself to take the site offline with the famous "Welcome to your new Drupal website" message. That turned out to be a permission access problem, but it only occurred after a failed post was attempted. To repair it, go to admin/content/node-settings and select the "rebuild permissions" button. Once the permissions were repaired it was fine – until you made another post, and then Drupal repeated the offline cycle and the page-access SQL errors in the log began again.

Fortunately there were some hints as to the problem out there, but most of them were for MySQL, and I was using Postgres. Here's my solution to the problem.

First, the most verbose error message I got was on this very site, where it looked like this after trying to post:

* warning: pg_query(): Query failed: ERROR: duplicate key value violates unique constraint "node_revisions_pkey" in /path/to/drupal-6.2/includes/ on line 139.
* user warning: query: INSERT INTO node_revisions (nid, uid, title, body, teaser, log, timestamp, format) VALUES (0, 1, 'test', 'test test test test test test test test test test test vv', 'test test test test test test test test test test test test test vv', '', 1294612270, 4) in /path/to/drupal-6.2/includes/ on line 3538.
* warning: pg_query(): Query failed: ERROR: duplicate key value violates unique constraint "node_vid_idx" in /path/to/drupal-6.2/includes/ on line 139.
* user warning: query: INSERT INTO node (vid, type, language, title, uid, status, created, changed, comment, promote, moderate, sticky, tnid, translate) VALUES (0, 'blog', '', 'test', 1, 1, 1294612270, 1294612270, 2, 1, 0, 0, 0, 0) in /path/to/drupal-6.2/includes/ on line 3538.
* warning: pg_query(): Query failed: ERROR: duplicate key value violates unique constraint "node_comment_statistics_pkey" in /path/to/drupal-6.2/includes/ on line 139.
* user warning: query: INSERT INTO node_comment_statistics (nid, last_comment_timestamp, last_comment_name, last_comment_uid, comment_count) VALUES (0, 1294612270, NULL, 1, 0) in /path/to/drupal-6.2/modules/comment/comment.module on line 607.
* The post could not be saved.
* The post could not be saved.

If you are using Postgres on the command line via the psql client, you can examine all sequences using the \ds command:

\ds
You'll see that both node_nid_seq and node_revisions_vid_seq have been created. If you've tried creating a number of new pages or posts and failed, you'll notice something if you use nextval() like this:

SELECT nextval('node_nid_seq');

It will be quite a bit ahead of your actual last successful post's nid, which you can see like this:

SELECT max(nid) FROM node;

However, if you do this:

SELECT nextval('node_revisions_vid_seq');

Make a note of it. Then find your actual last node_revisions max vid like this:

SELECT max(vid) FROM node_revisions;

And you’ll see it is probably only one higher.
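As an aside, if a sequence really has drifted ahead of the table like node_nid_seq has here, you can resynchronize it with setval(). A sketch, shown for the node sequence:

```sql
SELECT setval('node_nid_seq', (SELECT max(nid) FROM node));
```

The next nextval() call will then return max(nid) + 1.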

At this point, examine the node table like this:

\d node

                                  Table "public.node"
  Column   |          Type          |                     Modifiers                      
 nid       | integer                | not null default nextval('node_nid_seq'::regclass)
 vid       | integer                | not null default 0
 type      | character varying(32)  | not null default ''::character varying
 uid       | integer                | not null default 0
 status    | integer                | not null default 1
 created   | integer                | not null default 0
 changed   | integer                | not null default 0
 comment   | integer                | not null default 0
 promote   | integer                | not null default 0
 moderate  | integer                | not null default 0
 sticky    | integer                | not null default 0
 mos_id    | integer                | 
 language  | character varying(12)  | not null default ''::character varying
 tnid      | integer                | not null default 0
 translate | integer                | not null default 0
 title     | character varying(255) | not null default ''::character varying

And you can correctly see under “Modifiers” that the default value for nid is the next value in the node_nid_seq, which is the equivalent of the more familiar MySQL AUTO_INCREMENT attribute.

Now try the same on node_revisions:

\d node_revisions

                       Table "public.node_revisions"
  Column   |          Type          |               Modifiers                
 vid       | integer                | not null default 0
 uid       | integer                | not null default 0
 body      | text                   | not null
 teaser    | text                   | not null
 log       | text                   | not null
 timestamp | integer                | not null default 0
 format    | integer                | not null default 0
 nid       | integer                | not null default 0
 title     | character varying(255) | not null default ''::character varying

And you'll see that the default value for vid is zero (0). It isn't auto-incrementing, and so you are getting errors trying to write, because you already have a row with a vid of 0:

SELECT min(vid) FROM node_revisions;
 min
-----
   0
(1 row)

So what is the solution?

ALTER TABLE node_revisions ALTER COLUMN vid SET DEFAULT nextval('node_revisions_vid_seq'::regclass);

Double check to make sure it went through:

\d node_revisions
                                   Table "public.node_revisions"
  Column   |          Type          |                          Modifiers                           
 vid       | integer                | not null default nextval('node_revisions_vid_seq'::regclass)
 uid       | integer                | not null default 0
 body      | text                   | not null
 teaser    | text                   | not null
 log       | text                   | not null
 timestamp | integer                | not null default 0
 format    | integer                | not null default 0
 nid       | integer                | not null default 0
 title     | character varying(255) | not null default ''::character varying

…And it looks good!

We almost switched to WordPress for one site because of this error, but thinking about converting 5 years and 1250 posts from PostgreSQL Drupal to MySQL WordPress didn’t sound like fun. Fortunately the fix was easy, but nailing down the problem took some time first. Hopefully this helped someone else.

Echo Layla Working in Ubuntu Linux

The Echo pro audio products are great. I’ve been using an Echo Layla 24/96 since about 2002. I primarily do my recording with 64Studio, but I like to listen to music while I work on programming in Ubuntu.

I haven't had problems getting it working in most versions of Linux for the last few years. The easiest install was Mandriva, which recognized it right off the bat. My memory doesn't serve me too well, but 64Studio works flawlessly with record-low latencies.

But it didn't work right out of the box with Ubuntu, and that's no big deal. Finding out how to get it to work was tougher. Lots of people complain that Linux doesn't "just work." Well, when I installed the Layla on Windows it didn't just work either; I had to install a driver. Fortunately in Linux the driver is already included. What's missing is the mixer and firmware.

There are many solutions out there, including compiling ALSA from scratch, but the solution I found over at the Echo forums was much, much simpler. Here's the excerpt from the thread "I want to use Layla 3g in Ubuntu":

1) Add the Medibuntu repository to your software sources.
2) Install the alsa-firmware package to get the drivers.
3) Install alsa-tools-gui package to get echomixer (a mixer for Echo Audio cards).
4) If you’re using PulseAudio you need to manually add it to your /etc/pulse/ file *.
5) Reboot & enjoy high quality sound.

And that worked for me. After you reboot, make sure the volume is down low on your speakers; then load the echomixer program from your Sound menu and adjust it properly.

No problem. You should have your Echo Layla, Layla24, and possibly 3g, Mia, Gina and other Echo pro audio cards working in Linux.

Upgrading PostgreSQL to a new major version

Upgrading PostgreSQL to a new major version can be a challenge, but it isn't difficult. You need to plan ahead and do some testing afterward, but PostgreSQL includes everything you need to make the change almost seamlessly. Here is a step-by-step rundown of how to make the upgrade and gain all the benefits that come with a new Postgres version.

Upgrading PostgreSQL

Upgrading PostgreSQL need not be hard. The creators have provided all the facilities you need to upgrade nearly painlessly and, depending on the extent of your use, nearly seamlessly.

These notes were made when I was upgrading from PostgreSQL 8.2.6 to 8.3.0, but the information should apply equally to other versions. Keep in mind the version numbers are specific to these releases, so substitute your own accordingly.

According to the PostgreSQL INSTALL doc, here's their order of install (not upgrade); the upgrade procedure is almost identical, with the addition of backing up and restoring your old data.

./configure
gmake
su
gmake install
adduser postgres        # new installs only
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test


First, run configure. I changed the install path to reflect the version so that I can run the two servers in parallel.
./configure --prefix=/usr/local/postgres-8.3.0


Watch for errors. Run gmake, not make – gmake is GNU make, which the PostgreSQL build requires (on many Linux systems make is already a symlink to gmake). Pretty straightforward. The mailing lists are there to help if you have problems with compilation, which is rare. Googling for particular errors is also helpful, and will reveal a fix if you've encountered a bug or a platform-specific problem.

gmake install OR gmake install-strip

If you are not using the --enable-debug option, then consider using gmake install-strip when installing. This reduces the size of the binary files and libraries and saves some memory while loading. I use it on any non-development machine.

Make the /data directory

Straightforward here: create the directory and initialize the new database cluster.

mkdir /usr/local/postgres-8.3.0/data
chown postgres /usr/local/postgres-8.3.0/data
su - postgres
/usr/local/postgres-8.3.0/bin/initdb -D /usr/local/postgres-8.3.0/data

While running initdb you’ll see:

The default text search configuration will be set to "english".
creating dictionaries ... ok

This is evidence of the brand-new tsearch, which is now integrated into the core. Many people had upgrade problems with tsearch in the past, and for me tsearch is an important tool, especially for people migrating from MSSQL, so it is a great step forward to see it included in the core. Please see below if you were using the old tsearch contrib module.

Compile contrib modules

After you perform the install, pause here. Take a look at the contrib directory in the source. Are you using any contributed modules? If so, this is a good place to compile and install them, before you restore your old database. If you don't, and the contrib modules had you insert some SQL into your database, then when you restore your dump the SQL will be present but the module won't, and you will get errors. Read the README file in the top contrib directory for the procedure; it is usually gmake all; gmake install. Don't run the regression tests now, because you haven't yet loaded the SQL code that goes with the modules. Be aware that regression tests try to connect to the default port – that means the original server – and will probably fail if you run the new server in parallel on a different port later on. From the manual for contrib modules:

After a major-version upgrade of PostgreSQL, run the installation script again, even though the module’s objects might have been brought forward from the old installation by dump and restore. This ensures that any new functions will be available and any needed corrections will be applied.

Also, if this is version >= 8.3.0 and you were using an older version of tsearch, you'll need to read the manual page "Migration from Pre-8.3 Text Search". Loading a pre-8.3 dump that contained contrib/tsearch into an 8.3 server will cause problems – it caused my restore to fail. The manual suggests loading the new text search objects before the restore, because that prevents loading the old tsearch objects from the old database.
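The per-module build in the contrib tree generally looks like this. pgcrypto is just an example module name; build whichever modules you actually use:

```
cd contrib/pgcrypto
gmake
gmake install
```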

Edit postgresql.conf and other configuration files

Do a diff on your postgresql.conf and pg_hba.conf files against the new versions. When moving between major versions I would edit them by hand. The last time I moved between minor versions I just copied the old files over the new, as suggested, and found errors in the log at startup because a configuration option had been removed and I was still setting it.
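To make the comparison concrete, here is the pattern with two throwaway files standing in for the old and new postgresql.conf. The file contents are invented for illustration, though the renamed option shown (redirect_stderr becoming logging_collector in 8.3) is a real example of the kind of change a diff will surface.

```shell
# Stand-ins for the old and new config files; contents are example values.
printf 'shared_buffers = 32MB\nredirect_stderr = on\n' > old.conf
printf 'shared_buffers = 32MB\nlogging_collector = on\n' > new.conf
# Lines prefixed with < exist only in the old file, > only in the new one.
diff old.conf new.conf || true   # diff exits non-zero when files differ
```

Any line that appears only on the old side is a setting to double-check against the new release notes before carrying it forward.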

Backup your old database

Seems like an odd place to be performing the backup procedure, doesn't it? The PostgreSQL manual has a good reason for it:

To make the backup, you can use the pg_dumpall command from the version you are currently running. For best results, however, try to use the pg_dumpall command from PostgreSQL 8.3.0, since this version contains bug fixes and improvements over older versions. While this advice might seem idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to install the new version in parallel with the old version. In that case you can complete the installation normally and transfer the data later. This will also decrease the downtime.

The reason is that the pg_dump or pg_dumpall binary from the new release has bug fixes the old version lacks. Now it makes sense. With that in mind, I backed up to a file first, just as a precaution.

/usr/local/postgres-8.3.0/bin/pg_dumpall -p 5432 -U postgres | bzip2 -zv --best > pre_8.3_dump.sql.bz2

This command uses the NEW version of the pg_dumpall binary (8.3.0), tells it to run against the currently running PostgreSQL backend (8.2.6) on port 5432, and stores the output in a bzipped file in the current directory.
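Since this file is your safety net, it is worth verifying the archive before moving on; bzip2's -t flag checks integrity without extracting anything. A sketch using a stand-in dump file in place of the real backup:

```shell
# Create a stand-in dump and compress it the same way as the real backup.
printf 'CREATE DATABASE example;\n' > pre_8.3_dump.sql
bzip2 -z --best pre_8.3_dump.sql          # produces pre_8.3_dump.sql.bz2
# -t decompresses to nowhere, just to verify the archive is intact.
bzip2 -t pre_8.3_dump.sql.bz2 && echo "backup archive OK"
```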

Start new PostgreSQL server on a different port

su - postgres
/usr/local/postgres-8.3.0/bin/postgres -p 5543 -D /usr/local/postgres-8.3.0/data >logfile-8.3.0 2>&1 &

This starts the new server in parallel with the old one, both running at the same time, by putting the new server on port 5543 for restore and testing. We'll change it back to the default port once we know everything works. Run ps aux to make sure both servers are running. The first time I did this, the port I chose for the new server was already in use by something else and PostgreSQL refused to start.

Loading the data

There are a couple ways to do this. You can reload the dump that you just saved, or you can pipe the output of the pg_dumpall command into the new postmaster process that is running on an alternate port. I usually prefer to go with the pipe.

/usr/local/postgres-8.3.0/bin/pg_dumpall -p 5432 -U postgres | /usr/local/postgres-8.3.0/bin/psql -p 5543 -U postgres

This runs a dump on the old server (port 5432) and immediately restores it into the new server (port 5543) by piping the output to psql. Use the pg_dumpall and psql binaries from the new version you just compiled to gain the bug fixes and performance benefits.
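One caveat with the pipe: by default the shell reports only the exit status of the last command, so a failed dump can look like a successful restore because psql exited cleanly. bash's pipefail option fixes that, and is worth setting before running the dump-and-restore pipe. A minimal demonstration of the difference:

```shell
# Without pipefail, the pipeline's status is the last command's (0 here),
# even though the first stage failed.
set +o pipefail
false | cat || st_default=$?
echo "default status: ${st_default:-0}"     # prints 0

# With pipefail, the pipeline fails if any stage fails.
set -o pipefail
false | cat || st_pipefail=$?
echo "pipefail status: ${st_pipefail:-0}"   # prints 1
```

With set -o pipefail in effect, a pg_dumpall error in the pipe above would surface as a non-zero exit status instead of being masked.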

Remember, after you load the old data into the new server, to apply the SQL files for any contrib modules to your new database. You should also check your server log file (logfile-8.3.0) for any notices or errors, using a grep command like cat -n logfile-8.3.0 | grep -i error . Also vacuum the entire cluster after reloading:
/usr/local/postgres-8.3.0/bin/vacuumdb -p 5543 -a -z
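The log check can be sketched with a stand-in logfile; the log messages below are invented examples, but the command is the same one you would run against the real logfile-8.3.0.

```shell
# A stand-in server log; the messages are invented examples.
printf 'LOG:  database system is ready\nERROR:  relation "old_ts" does not exist\n' > logfile-8.3.0
# cat -n numbers the lines so you can jump straight to the error in the full log.
cat -n logfile-8.3.0 | grep -i error
```

An empty result from the grep means no errors were logged; any hits should be investigated (missing contrib objects are a common cause) before switching servers.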

Switch servers

You could run the new server on the alternate port for a period of time with some of your test or staging applications, to make sure everything is working properly and that any feature changes affecting your software can be dealt with. If you do this, just reinit the cluster and do the parallel dump again when you are ready. If you don't, you'll lose any data inserted in that time, which could be days' worth.

Otherwise, making the switch is simple. Shut down both servers, then restart the new server on the port the old one ran on. If you started the original PostgreSQL with an init script, shut it down using that init script, edit the script to refer to the new version, and use it to start the new server with /etc/init.d/postgres start .

Lastly, remember to change your environment to point to the new PostgreSQL version. This isn't strictly required, but if you use psql on the command line often you should set your PATH and MANPATH to refer to the correct location. If your PostgreSQL is installed in /usr/local/postgres-8.3.0, for example, you could use a symbolic link called /usr/local/pgsql that points to the currently installed version. That way you won't have to change your profile, setup scripts for web-based interfaces, and so on.
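The symlink switch can be sketched like this, using throwaway directories under /tmp in place of the real install trees:

```shell
# Throwaway directories standing in for the two install trees.
mkdir -p /tmp/postgres-8.2.6 /tmp/postgres-8.3.0
# Point the stable name at the old version...
ln -sfn /tmp/postgres-8.2.6 /tmp/pgsql
# ...then flip it to the new one when you make the switch.
ln -sfn /tmp/postgres-8.3.0 /tmp/pgsql
readlink /tmp/pgsql                       # prints /tmp/postgres-8.3.0
```

Everything that references the stable path (PATH entries, init scripts, web interfaces) keeps working across upgrades; only the link target changes.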

Upgrading PostgreSQL to a new major version takes some time, but it isn't difficult. If you employ this same procedure for your minor version upgrades you'll find it becomes second nature. Practice makes perfect.