All posts by Talliashimlistner

Forcing Redirect to a www URL with Apache Rewrite for SEO

Redirecting requests for the non-www version of your domain to the www version can help boost your SEO, eliminate duplicate search engine entries and improve your ranking. And it is easy to do with Apache and mod_rewrite.

I registered a new domain about a week ago and then pushed my new social-networking-style project live. I was still fiddling with server upgrades so I didn't tell anybody about it. Sure enough, Google came crawling and took every page. Excellent.

I went to Google and performed a:


site:example.com

…and it returned all my pages. Excellent again, fully indexed in less than a week (though not yet showing in search results). Wait a minute. I forgot the www. I tried it again with:


site:www.example.com

And Google returned one page – the doorway page for my web host! Sometime after I registered the domain and pointed the DNS to my machine, Google crawled the www version of the URL. I don't know how or where it got the link – probably from a list of recently registered domains. Then, after that week was up and I posted the full site, Google crawled the non-www version of the URL and indexed both.

Anyone see a problem there? Well, first, I've got a doorway page for my host showing up for the www version of my URL! Second, if Google crawls both versions on its own, without any links on the net, sooner or later it is going to see duplicate content – both versions will eventually be the same, I hope. Third, it is crawling the two versions at different rates, giving out-of-date results for one of them. Last, it is splitting up the results for the domain, and I don't see that being good.

A solution that has been proposed by others in the past, and one that I would recommend again, is to redirect all non-www requests to the www host version of your domain, if that is the version you are using. If you are using Apache and have mod_rewrite you can do the following:


<IfModule mod_rewrite.c>
RewriteEngine on
# If the requested host is anything other than www.example.com...
RewriteCond %{HTTP_HOST} !^www\.example\.com
# ...issue a permanent (301) redirect to the www host, preserving the path
RewriteRule ^/(.*) http://www.example.com/$1 [R=301,L]
</IfModule>

I did this in my httpd-vhosts.conf file, but I believe it should work equally well in .htaccess. See the note at the bottom of the page for the differences.
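For reference, here is what I believe the .htaccess equivalent looks like – a sketch, since I only tested the vhost version myself. Per-directory rewrites strip the leading slash, so the pattern loses the ^/ :

<IfModule mod_rewrite.c>
RewriteEngine on
# Same condition: any host other than www.example.com
RewriteCond %{HTTP_HOST} !^www\.example\.com
# No leading slash here; .htaccess rewrites see the path without it
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
</IfModule>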

The RewriteCond line says: for every request where the HTTP_HOST does NOT equal www.example.com, redirect it to www.example.com. This won't work if you use subdomains like subdomain.example.com, because it will redirect those requests to www.example.com as well.

Very important: don't forget the R=301, which makes this an HTTP redirect with status code 301. According to W3.org:

301 Moved Permanently

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs.

So Google SHOULD remove old references to the non-www domain and all your future results should look the way you like.

Proponents of SEO will tell you this automatic redirect to the www version of your domain using Apache and mod_rewrite is good for ranking, eliminating duplicate entries and boosting your visibility. Give it a try if you haven't yet and see how it works for you.

UPDATE: If you put this in your root Apache configs instead of .htaccess make sure you write ^/(.*) instead of (.*) or you may get a double slash between your domain name and pages which Google will also consider duplicate content.

Monitoring the Postgres Daemon

Following up on Running Postgres with Daemontools – Shutdown Errors, the Postgres mail list had some advice on monitoring and automatically restarting Postgres.

It seems that automatically restarting the postmaster process from Postgres is a dangerous proposition. As mentioned on the mail list, if a process quits for whatever reason, daemontools will simply restart it. That could theoretically fail and restart 3600 times per hour, every hour, every day, without you ever being notified that there is a problem requiring attention. This may be fine for some things, but not Postgres.

Some people indicated they use a custom script that tries to connect to postgres and notifies them if the connection fails. Some said the program monit will do the same job. I believe monit is quite extensible and will allow you to be notified as well as restart the server.
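A minimal sketch of such a check script, run from cron every few minutes – the user, database, and notification address are placeholders, and it assumes a working mail command:

#!/bin/sh
# Try a trivial query against the server; mail the admin if it fails.
if ! psql -U postgres -d template1 -c 'SELECT 1;' > /dev/null 2>&1
then
    echo "Postgres connection check failed on $(hostname)" \
        | mail -s "Postgres down" admin@example.com
fi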

Still another person said they run the nagios monitoring program and are notified by email and pager of a failure.

Whatever the solution is, it seems only right to me that you should be notified of a failure if you so desire. At the same time, the mechanism that is handling the process should provide a clean shutdown. Apparently monit does that.

While I can't advocate daemontools for Postgres, I would certainly say it is time to revisit nagios and monit. I've used nagios in the past but had resource issues with it. I've perused the monit documentation and enjoyed the flexibility. Now I'll be looking at them both again and reporting back.

Running Postgres with Daemontools – Shutdown Errors

Running anything under daemontools seems like a great idea, but I've had a particularly bad time trying to shut down postgres under daemontools while running Apache and PHP with persistent postgres connections.

I had a VPS that was running postgres with daemontools and I just let it lapse because I wasn't getting enough use out of it. Instead I focused on my main machine. When I went to upgrade Postgres I noticed that I didn't have it running under daemontools.

I set to work getting it running. I even prepared a tutorial for you on running postgres under daemontools. I got everything working relatively easily using some of the how-tos found on the net. Everything was working well, and just as I was about finished I decided that bringing it up was good, but that I should verify shutdown too.

I issued the required svc -d /service/postgres and everything went wacky. Every 3 minutes the log recorded “FATAL: the database system is shutting down”. After examining ps output I found a defunct postmaster and a bunch of active connections.

After several hours of reading and trying I’m pretty sure I came up with the answer. From the postgres manual regarding pg_ctl:

In stop mode, the server that is running in the specified data directory is shut down. Three different shutdown methods can be selected with the -m option: “Smart” mode waits for all the clients to disconnect. This is the default. “Fast” mode does not wait for clients to disconnect. All active transactions are rolled back and clients are forcibly disconnected, then the server is shut down. “Immediate” mode will abort all server processes without a clean shutdown. This will lead to a recovery run on restart.

svc -d sends a TERM and a CONT signal. It certainly isn't passing any -m option, and a plain TERM gives you the default “smart” shutdown, which waits for clients to disconnect. But I did notice that the init script that ships with postgres (which shut down successfully) performs the following on a 'stop' request:


stop)
echo -n "Stopping PostgreSQL: "
su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast"
echo "ok"
;;

In other words, the -m fast switch means it doesn't wait for clients to disconnect. This is an important point if you run PHP with persistent connections. It explains why those connection processes stuck around after svc -d despite the defunct postmaster: the shutdown triggered by svc -d /service/postgres was probably waiting on those persistent connections to disconnect. That's my guess. svstat kept saying it was waiting to shut down.

Thus ends my attempt to run postgres under daemontools for now. Starting it was successful, reloading configuration files was successful, but that's where it ends. As a guy who has had both data loss and corrupt filesystems from non-clean postgres shutdowns, I'm pretty concerned about what will happen when the computer reboots if it can't disconnect those connections.

Or, for that matter, what will happen if postgres shuts down, then supervise restarts it, but those persistent connections remain open? I'm betting Apache will report that it is unable to connect to the database server. I've noticed in the past that changing a user's search path in postgres on the command line (through psql) and then reloading a web page has absolutely no effect, because the web server is holding open a persistent connection to which those changes don't apply.
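For example (user and schema names are hypothetical), a change like this only reaches new sessions:

ALTER USER webuser SET search_path TO app_schema, public;

A backend serving a persistent connection keeps its old search_path until the client reconnects.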

This will take some more examination, but for now it may be better to look at another tool such as monit, or consider getting rid of persistent connections.
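If I do revisit daemontools, one idea is a run script that translates daemontools' TERM into a fast shutdown. This is an untested sketch – the paths and the setuidgid user are assumptions for a typical source install:

#!/bin/sh
PGDATA=/usr/local/pgsql/data
PGCTL=/usr/local/pgsql/bin/pg_ctl
# Start the postmaster in the background so this shell stays alive
# to catch the TERM that svc -d sends.
setuidgid postgres /usr/local/pgsql/bin/postmaster -D "$PGDATA" &
# Turn TERM into 'pg_ctl stop -m fast' so persistent clients are
# disconnected instead of waited on.
trap 'setuidgid postgres "$PGCTL" stop -D "$PGDATA" -m fast' TERM
wait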

Validating POST values from HTML in PHP revisited

In a recent article titled “Submitting POSTed HTML Forms and Registering Variables in PHP” we examined a better way to treat tainted POST variables passed to your PHP script from an HTML form. We decided to create an array called $FORM_FIELDS containing only the names of the elements we wanted in our new array, thereby cleaning up all fields in one statement (loop) and preventing unwanted fields from becoming part of memory or registered session variables.

The creators of PHP went one step further by making the $FORM_FIELDS array a multi-dimensional array. They added a couple of fields to this array to represent the data type, plus an additional function to process the data.

The array may look like this:

$FORM_FIELDS = array(
    'country_id' => array('type' => 'int', 'function' => 'addslashes')
);

Now, when you process your loop as in the previous article, you just add a step for the data type:


if (isset($FORM_FIELDS[$name]['type']))
{
    // settype() casts the variable in place and returns a boolean,
    // so call it on the copied value instead of assigning its result
    settype($tmp[$name], $FORM_FIELDS[$name]['type']);
}

The reason this is important is that incoming values from an HTML form all arrive as type 'string'. If you are inserting this information into a database or otherwise performing calculations on it, then it is nice to know it is of the data type your functions expect.

But there’s more. If someone is attacking your program by sending POST values different from those you expect or different from those in your form then this will help.

You simply set the data type before you perform validation. If you have a field where a user selects the country name from a drop-down list box, and the value sent to your form is the country_id, then you would pick 'integer' as the data type. But what happens when a user sends a string? If you try inserting it into a database you get an error. If you try to compare it against a list of known country_ids for validation purposes you will also get an error.

If you perform the settype() function in PHP before your validation, you will either get the correct data type or a value of zero. If a malicious user sent a string of 'Aruba' for your country_id field and you cast it to an integer, you would get a zero – any string without leading digits becomes zero. You can then proceed with your validation, which would treat a value of zero as invalid.

You will want to take a look at the PHP manual section called “Type Juggling.” There are certainly some caveats, and some conversions may not seem logical. For example, if you had the string “10thingsIhate” and cast it to an int you would get int(10).
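A quick illustration of both cases (the values are hypothetical):

<?php
$a = 'Aruba';
settype($a, 'integer');
var_dump($a); // int(0) -- no leading digits
$b = '10thingsIhate';
settype($b, 'integer');
var_dump($b); // int(10) -- leading digits survive the cast
?>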

Brush up on the PHP types. They can save you some work in your validation of HTML form input in your PHP scripts.

PHP’s APC cache and how it relates to Apache Bloat

You just finished your install of Apache, PHP and APC cache on a VPS and you're beat. You spent about a day going over every configuration and compile option trying to eke performance out of your VPS machine and web server. Then you take a look at the output of top or ps and notice: hey, my Apache process is 150 megs! Well, isn't that grand.

Compiling the big Apache on a limited-resource VPS can be a challenge for those of us who like to tinker. On the last VPS I got, I scrapped the packaged stuff and started compiling an Apache for the machine. The problem is trying to meet your needs with the limited memory you usually get on these machines. Although I like lighttpd, I'm not ready to give it a go on a production server that is being monitored by the media all the time. I need something as guaranteed as I can get. Although lighttpd may be such a beast, I'm not going to test it out on this site at this time!

I sat down and started to go over all the configuration options for Apache, PHP and APC to get a nice small package to do the job. I was surprised how much junk PHP has compiled in by default. Compile it once and execute a phpinfo() and you'll see. Just start with the "--disable" switches for anything you don't want.

After about a day of thinning out the size of my stack I fired up top and used the SHIFT-A toggle to see the alternate views. I noticed something odd…

   PID %MEM  VIRT SWAP  RES CODE DATA  SHR nFLT nDRT S  PR  NI %CPU COMMAND
 24140  0.1 78592  69m 7228  504 2124 2952    6    0 S  16   0    0 httpd
 11329  0.1 78408  69m 6908  504 1964 2736    2    0 S  16   0    0 httpd
 24141  0.1 78564  70m 6752  504 2120 2492    0    0 S  16   0    0 httpd
 24142  0.1 78564  70m 6732  504 2120 2476    0    0 S  16   0    0 httpd
 27813  0.1 78560  70m 6684  504 2116 2432    0    0 S  16   0    0 httpd

Doesn't that seem like a lot of VIRT and SWAP for Apache 2? Yes, it is. If you used the suggested 128M for apc.shm_size it will be a lot bigger, too! I used 64M, and you can see that at 78592 KB VIRT minus the 64M of shared memory for the APC cache, the actual size is around 13M for Apache, with 6.7M resident. That makes me feel better! I was so worried I had gotten something horribly wrong.

If you happen to run across this kind of thing in testing, simply set your apc.enabled in php.ini to ‘0’ to disable it, restart Apache and check ‘top’ again. Likely nothing to worry about.

The machine I am running is only hosting 4 sites so 64M is a good starting point, but what you should do is copy the file ‘apc.php’ from your APC source directory into a folder on your webserver and visit it frequently to see how it behaves. Depending on your other settings like the cache lifetime and garbage collection, if you have a large portion of FREE and a small USED then you can probably set the apc.shm_size lower. If it makes you feel better 🙂
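For reference, the relevant php.ini lines on this machine look roughly like the following – the values are my setup, not recommendations:

extension=apc.so
apc.enabled=1
; shm_size is in megabytes in this APC version; watch apc.php before raising it
apc.shm_size=64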

PHP 5.2 and Drupal 4.7.4 don’t work together

While doing a software upgrade on a server today I ran into some problems. Apparently Drupal 4.7.4 and PHP 5.2.1 don’t work together.

After upgrading PHP, Apache and APC cache, I couldn't get Drupal to work on any of the 7 sites on one machine. I could log in to any drupal site as the admin and be shown my user page, but when trying to do anything at all as an authenticated user it considered me logged out.

I initially thought APC had something to do with it and disabled it. That didn't fix it. On a hunch, and after a lot of reading with no answer, I decided to install a lesser version of PHP and that fixed it. PHP 5.1.1 was lying around and did the trick. Unfortunately I needed 5.2 to fix a memory leak!

I decided to update the Drupal sites to the latest version first to see if that would help. Lo and behold, when I visited the Drupal download page I was greeted with:

PHP 5.2 compatibility is only available from Drupal 4.6.11 / 4.7.5 / 5.x.

That pretty much clinches it. I need PHP 5.2 to fix a memory leak, so I'll take the time to upgrade to Drupal 5 now.

Be warned: if you are running Drupal 4.7.4 sites and attempt to upgrade to PHP 5.2, you may find yourself unable to log in or navigate as an authenticated user in Drupal. Downgrade to PHP 5.1, upgrade Drupal past 4.7.4 (4.7.5 or later), and then continue your upgrade to PHP 5.2 and you should be all right.

Apache, Postgres and Drupal on a VPS

I really would prefer to have my own server, but sticking a box in colo is expensive. Where I live, getting access to the physical colo space would be nearly impossible too. As a result I run on a VPS. Unfortunately a VPS has some horrible limitations, depending on who is on the box with you.

Recently I decided to move my personal blog off of b2evolution and onto Drupal. Too bad drupal is such a resource hog. Most CMS and blog software is, though, and it is really hard to find minimal, optimized blog software that uses any form of caching. Today it hit the skids and my patience hit the wall. Argh!

I was converting my personal blog by hand because I only have about 30 entries, so it didn't pay to write a conversion script. Every time I hit the 'post' button in Drupal I wound up with a blank screen, a 'could not connect', or worse – any command in an SSH terminal window returned “could not allocate memory”. I had to do some fast tuning of something, because I was rebooting the server after every blog post!

I chose to tackle Apache first because the project publishes an Apache Performance Tuning Guide that helps a bunch. Postgres – well, I'm running an old version that I really need to upgrade before I tackle its configuration and optimization, and that's not a 30-minute job.

VPS, Apache and Low Memory

A VPS is low on memory for sure. Even though you can sometimes burst higher, in general it is pretty low. The very first thing in the Apache performance tuning guide is tackling memory.


You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes.

Using top with the 'H' toggle to show threads, I am able to see that the average Apache process is using WAY TOO MUCH memory at 25M apiece – with 10 processes running. I don't have time to tune the process size now, so I'll tune the number of servers using some very simple configuration parameters. Since we are using MPM prefork, the directives can be found in the extra/httpd-mpm.conf file under mpm_prefork_module.

Since I am supposed to be guaranteed 256M of memory, burstable to 1G, I'll optimize for the lower number. 256M / 25M is about 10, and that's not including room for other processes. The current setting is 150!
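If you want a number instead of eyeballing top, a one-liner like this gives a rough average resident size per httpd process in kilobytes (GNU ps on Linux assumed; RSS counts shared pages, so treat it as an upper bound):

ps -C httpd -o rss= | awk '{ sum += $1; n++ } END { if (n) print sum / n }'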

From:


StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0

To:


StartServers 2
MinSpareServers 2
MaxSpareServers 5
MaxClients 10
MaxRequestsPerChild 0

So I start only 2 servers instead of 5, keep between 2 and 5 spare servers instead of 5 to 10, and allow only 10 clients instead of 150. This will essentially queue requests when all clients are busy, but it shouldn't dip into swap space and it will save a bunch of memory. This will of course be monitored. Once time permits and I am able to minimize the size of each Apache 2 process (the other factor), I will revisit this and likely increase MaxClients accordingly.

Upgrading Postgres in parallel with your old version

Today I got caught up in PostgreSQL. A feature didn’t work as planned and when the mailing list responded to my plea for help it turned out the feature didn’t work fully in my version of Postgres, v.8.1.4.

Sad to say, I haven’t compiled a pgsql version for 13 months. I got so lazy that the last time I installed on my dev machine I just used the package that came with Kubuntu. Now I’m in a bind. I have a production machine and a dev machine to backup, upgrade and restore.

I learned one important lesson in this, and that is to name your postgres installation directory after the version it contains. That kind of goes against my best judgement, but it means you can run two Postgres instances beside each other. Why would you want to do that? Take this line from the upgrade instructions:

To back up your database installation, type:

pg_dumpall > outputfile

To make the backup, you can use the pg_dumpall command from the version you are currently running. For best results … try to use the pg_dumpall command from [your new version of] PostgreSQL … since this version contains bug fixes and improvements over older versions. While this advice might seem idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to install the new version in parallel with the old version. In that case you can complete the installation normally and transfer the data later. This will also decrease the downtime.

In other words: back up the existing database using the pg_dumpall binary from the new version – the version that supposedly can't be installed until the old one is backed up. That is, compile the new version, then run its pg_dumpall against the old backend.

Seems like a problem, but if you run the new version in parallel you have a couple of benefits as I see it.

1. You don’t have to take your database down during upgrade! Once you are sure things are working on your new version then you simply flick the switch and it is nearly transparent. Don’t be like the company down the block who took their site down for 2 days to perform a db upgrade 🙂

2. If things go horribly wrong on your upgrade then you should still have the existing data, database and files to work from and find out the problem.

Keep these things in mind:

1. Change the install path by specifying it with the --prefix switch during ./configure. Normally it installs to /usr/local/pgsql, but because I already have a server installed and running there, I chose to specify --prefix=/usr/local/pgsql-8.2.3

2. You can literally run it in parallel by changing the port on the new server. In this fashion you can pipe the output of pg_dumpall straight into the new server, eliminating the plain-text SQL file usually produced.

Make sure you block write access to the old version while performing the dump. When you are sure that everything dumped correctly you can shutdown the old server and start the new one on the old port.
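Put together, the sequence looks roughly like this – the version, paths and the alternate port 5433 are examples from my setup, and the server commands should be run as the postgres user:

# Build and install the new version alongside the old
./configure --prefix=/usr/local/pgsql-8.2.3
make && make install

# Initialize the new cluster and run it on an alternate port
/usr/local/pgsql-8.2.3/bin/initdb -D /usr/local/pgsql-8.2.3/data
/usr/local/pgsql-8.2.3/bin/postmaster -D /usr/local/pgsql-8.2.3/data -p 5433 &

# Pipe the old cluster (port 5432) straight into the new one -- no SQL file
/usr/local/pgsql-8.2.3/bin/pg_dumpall -p 5432 | /usr/local/pgsql-8.2.3/bin/psql -p 5433 -d postgres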

There you have it: a nearly seamless database upgrade, and a couple of good reasons for running your new Postgres version in parallel with your current one.

DDoS DNS leads to User Interface Woes

When I moved my server a little more than a year ago I decided that I would try to get away from System Administration tasks by offloading services where possible. The first one to go was DNS. DNS is simple to manage, but just one more thing to look after. I had bought into the marketing hype about having off-site DNS so I decided to try a third party DNS. Unfortunately, having a service that many people rely on for ALL their services is just a target for attackers. After several DDoS attacks with at least two providers I decided it was time to manage my own DNS again.

After my DNS was all set up, I decided to retain those third-party providers as my secondary DNS. After nearly a week of frustration I discovered one very important thing – not everyone who CAN program a computer SHOULD program a computer.

Sure, you know your network services as well as anyone really can, and you've administered a box for a while. You've got a great idea for a free online service, so you set out hacking away at a user interface to let your potential users connect with your good idea. The problem is, you probably aren't a programmer. Sure, you can figure out the semantics of a language pretty easily because you already know shell script, but that certainly doesn't make you an accomplished application developer.

How do I know? Because I've used your applications. Here are some things that frustrated me:

– inaccurate error messages
– no error messages on an error
– the WRONG error message for an error
– a blank screen instead of an error message
– your PHP path and MySQL info echoed to screen on an error
– assuming the error was MY fault

Of course, that is just a mild sampling of what kept me from using a service properly in the past week. We really don't know what goes on behind the scenes. When you built your application, was there a whiteboard involved? Did you have developer meetings? Is there a printout of a schema? Or did you just sit down and start typing until it looked good? Whichever way you chose, it didn't work, it didn't work well, or it just plain failed.

Either way, not everyone is a programmer. Too bad scripting for the Internet is so easy that everyone has had a crack at it. Most of those people would probably get queasy if they were asked to rewrite that application in the C programming language.

We are taught to examine and verify user input, but that can lead us to assume all problems are caused by the user. Make sure you put checks on your own code too: check for database connects, rows returned, function returns, and so on. This way you can debug your own code as well. I used to put secret checks into my code that would log or email me on an unexpected problem that didn't throw an error.
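A sketch of what I mean, with the connection string and address as placeholders:

<?php
// Suppress PHP's own warning output so paths and credentials
// are never echoed to the visitor on a failure.
$db = @pg_connect('dbname=example user=webuser');
if ($db === false) {
    // Tell me, not the user: message type 1 emails the string.
    error_log('DB connect failed', 1, 'admin@example.com');
    exit('Sorry, something went wrong on our end. Please try again later.');
}
?>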

As you can guess, I quit using this service. It was too frustrating to set up, and I could never be sure that the values I entered in the user interface were accepted. Trying to work around the inconsistencies was reminiscent of hacking on a TRS-80. I think we can all learn a good lesson about user interface design from the problems we see on other sites. The key to becoming a better programmer is remembering what those problems were, and then applying the lessons to your own sites. Sometimes you can be too close to your own project to see them.

Drupal multisite configuration problems

I’ve adopted Drupal for a majority of my online activities in the past year. It has enough of the things I like and is much faster and more stable than previous CMS or forum software I’ve used. However, some documentation seems to be lacking. Multisite configuration with Apache using a single Drupal codebase is one area.

I finally decided to try a project in which I would use a common codebase for Drupal across all of my websites. That is, rather than having a directory for each website that has a Drupal installation in it, I thought I would take advantage of Drupal’s multisite functionality by having one drupal installation (codebase) in a central directory and have all the project websites point to it.

Drupal multisite advantages

The advantages of Drupal’s multisite feature are easy to spot.

First, a common codebase means only one codebase to change during an update – even though drupal requires you to update each website's database separately.
Second, using an intermediary cache like APC for PHP means you use less server memory caching files across multiple websites, because the core files are the same.

Despite this great feature and its advantages, there is much confusion among patrons of the Drupal.org website as to the correct way to set this up. The Drupal install file covers it pretty well but leaves out one important detail – the web server configuration. Some purists say that is an Apache issue and we should leave it to the Apache mailing list, but hey, PHP does a pretty good job of detailing the Apache configuration in its instructions, so Drupal could too.

Most people who administer their own websites follow a ritual when creating a new website: create a directory for the new website, create the www subdirectory, and then configure the Apache DocumentRoot directive. With a single codebase used by multiple domains and websites you need a different approach.

The missing multisite ‘key’

The most important thing here is to make sure each website’s DocumentRoot in the Apache configuration points to the common drupal directory. This is an unusual configuration for most people but it works well.

The idea is that Drupal receives information from PHP (and Apache) telling it which website it is supposed to serve. Once it determines that, it retrieves the correct configuration file (database password and URL base) from the Drupal 'sites/' subdirectory. That's how it knows which website to display.

Here is a quick breakdown.

Install Drupal in a common directory. Here I chose /var/lib/drupal-4.7.4
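Drupal then looks up each site's configuration by hostname under the sites/ subdirectory. For two hypothetical domains the layout would be:

/var/lib/drupal-4.7.4/sites/default/settings.php
/var/lib/drupal-4.7.4/sites/www.example.com/settings.php
/var/lib/drupal-4.7.4/sites/www.example.org/settings.php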

Caveats

Some people use a symbolic link to point Drupal to the correct distribution. I've done this, but I suggest not doing it. Instead, install Drupal in a single directory named after its version number – that is, use the same directory structure and names that Drupal gave you when you extracted the zip or tarball.

Why? If you have a couple of sites and it comes time to upgrade, you can run into trouble. It is easy to just recreate a symbolic link pointing 'drupal/' to 'drupal-4.7.4', but unfortunately that affects ALL your websites instantly. Not good on a production server. If you have 50 websites, this means 50 sites using the new codebase and the same 50 websites awaiting your hand to manually update their databases using Drupal's supplied script. If any of those 50 are accessed during this wait period you'll be in trouble.
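To be concrete about what I am recommending against, the symlink approach looks like this (paths illustrative):

# Every vhost's DocumentRoot points at the generic /var/lib/drupal link
ln -s /var/lib/drupal-4.7.4 /var/lib/drupal
# Upgrading later by re-pointing the link flips ALL sites at once:
ln -sfn /var/lib/drupal-4.7.5 /var/lib/drupal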

The other disadvantage of the symbolic link comes if you are using third-party modules. You forget that 5 of your 50 websites are using a module that isn't maintained on the same schedule as the Drupal project, and guess what? It breaks your websites. I've found that a misbehaving module, or even an errant 'files' path in the Drupal settings, will frequently disable all other Drupal modules. Best to avoid this altogether.

You can save yourself from both of these negative scenarios by simply putting Drupal in the supplied directory name and then adjusting your Apache DocumentRoot directive as you update them. It is an extra step but very easy.

The final advantage of not using a symbolic link is that you can hold a website stable. That is, you can have a couple of different versions of Drupal on your server, used by different websites. Several hosting providers do this with PHP, and that is a very good example: if you upgrade to PHP 6 for one project and find it doesn't work with Drupal, you need to keep an old version of PHP installed for non-compliant websites.

With Drupal I've found that once in a while a website gets working just right and I don't want to update it. Or I quit monitoring it. Or it is an informational, content-only website with no subscriptions allowed. Or a million other reasons. Basically, I will 'freeze' that website and not allow any more code updates. Or say you have a module you depend on but the author abandoned it long ago, and it won't work with Drupal 6: simply freeze that website and only update your other websites with the new Drupal codebase.

Keeping a couple of distributions around can be handy, but it means you can't point them all at the codebase through a single symbolic link.

Unzip the new codebase in parallel to the old.

# tar zxvf drupal-4.7.x.tar.gz

Back up the database for the website you are about to update.
Exact commands vary depending on which database you are using.

Edit the Apache config file to change one website at a time.

# vi extra/httpd-vhosts.conf

Change:


DocumentRoot /var/www/example.com/www
ServerName www.example.com

To:


DocumentRoot /var/lib/drupal-4.7.4
ServerName www.example.com

Restart Apache so the config change takes effect.

# apachectl graceful

Visit the Drupal supplied database update URL for your website.

http://www.example.com/update.php

Watch for errors and check your logs. Visit the website and check the settings to make sure all modules still show up under the admin -> settings menu.

If successful, continue on with the next website on your server.