Category Archives: System Administration

Postfix, Dovecot and PostgreSQL Setup

While trying to set up a simple mail server on Ubuntu for a couple of small virtual domains with limited users, I came across a couple of excellent articles, but they were written for use with MySQL.  I have a machine with limited resources running custom software that uses PostgreSQL.  Although flat files would have been acceptable, I didn’t have the same luck with the breadth of documentation.  So I decided to work from the excellent articles I found and adapt them to Postgres for my server.

The best guide I found was from Linode, geared toward helping customers set up email with Postfix, Dovecot, and MySQL on their excellent platform.  The second guide I found left a lot to be desired, but since it was Postgres-specific I was able to take those pieces from it and adapt them to the Linode guide.  That guide was called How to setup a postfix email server with Dovecot and was published as a community tutorial on DigitalOcean.  While DigitalOcean and Linode are close competitors, Linode wins out for having the better mail server how-to.

I’m mainly going to show what changes need to be made to the Linode guide to make it work with Postgres; otherwise, follow that guide for the complete instructions.  I’ll assume you already have a Postgres server up and running on the machine.

Postgres – Creating the Database

Your first change will be under the “Creating the Database” heading.  Let’s start by using the DigitalOcean guide to create the user, while keeping the Linode naming convention as much as possible.

First, follow the adjustments to the /etc/postgresql/<version>/main/pg_ident.conf file (substitute your Postgres version in the path):

mailmap         dovecot                 mailuser
mailmap         postfix                 mailuser
mailmap         root                    mailuser
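For the map to actually be consulted, pg_hba.conf in the same directory needs an entry that references it.  A minimal sketch, assuming local peer connections (adapt it to your existing pg_hba.conf layout):

local   mailserver      mailuser        peer map=mailmap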

Reload Postgres so the mapping takes effect.
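On Ubuntu, something like this should do it (the exact service command varies by release; this is just a sketch):

sudo service postgresql reload

Now set up the user and database: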

# sudo -u postgres psql
postgres=# CREATE USER mailuser;
postgres=# REVOKE CREATE ON SCHEMA public FROM PUBLIC;
postgres=# REVOKE USAGE ON SCHEMA public FROM PUBLIC;
postgres=# GRANT CREATE ON SCHEMA public TO postgres;
postgres=# GRANT USAGE ON SCHEMA public TO postgres;
postgres=# CREATE DATABASE mailserver WITH OWNER mailuser;
postgres=# \q 
# sudo psql -U mailuser -d mailserver
CREATE SEQUENCE seq_mail_domain_id START 1;
CREATE SEQUENCE seq_mail_user_id START 1;
CREATE SEQUENCE seq_mail_alias_id START 1;

CREATE TABLE virtual_domains (
  domain_id INT2 NOT NULL DEFAULT nextval('seq_mail_domain_id'),
  domain_name varchar(50) NOT NULL,
  PRIMARY KEY (domain_id)
);


CREATE TABLE virtual_users (
  user_id INT2 NOT NULL DEFAULT nextval('seq_mail_user_id'),
  domain_id INT2 NOT NULL,
  password varchar(106) NOT NULL,
  email varchar(100) NOT NULL,
  PRIMARY KEY (user_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

CREATE TABLE virtual_aliases (
  alias_id INT2 NOT NULL DEFAULT nextval('seq_mail_alias_id'),
  domain_id INT2 NOT NULL,
  source varchar(100) NOT NULL,
  destination varchar(100) NOT NULL,
  PRIMARY KEY (alias_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

To add your domains to the database:

INSERT INTO virtual_domains (domain_name)
VALUES
  ('example.com'),
  ('hostname.example.com'),
  ('hostname'),
  ('localhost.example.com');

Do a simple select from the virtual_domains table and note the id of the domain you wish to add addresses for:

SELECT * FROM virtual_domains;
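Given the inserts above, the output should look something like this (your ids may differ):

 domain_id |      domain_name
-----------+-----------------------
         1 | example.com
         2 | hostname.example.com
         3 | hostname
         4 | localhost.example.com
(4 rows)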

According to the Postgres document we referenced earlier, you generate the password hash from the command line like so:

# doveadm pw -s sha512 -r 100
Enter new password: ...
Retype new password: ...
{SHA512}.............................................................==

Now you need to add some email addresses like so:

INSERT INTO virtual_users
  ( domain_id, password , email)
VALUES
  ( '1', '{SHA512}.............................................................==', 'postmaster@example.com'),
  ( '1', '{SHA512}.............................................................==', 'supah_user@example.com');

Now, if you created your user according to the DigitalOcean document, there is no password.  You’ll have to add one for your user like this:

ALTER ROLE mailuser WITH PASSWORD 'supersecretPaSSvvord';
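A hedged note: with the Linode-style setup, Postfix and Dovecot authenticate with this password over TCP, so pg_hba.conf must allow password logins from localhost.  Stock Ubuntu configs usually already have such a line; if yours doesn’t, it would look something like:

host    mailserver      mailuser        127.0.0.1/32    md5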

That should be it for your direct Postgres editing.  You can follow all the testing they show for MySQL, since it’s standard SQL.

Postfix

You can follow the Linode guide for Postfix, only needing to make changes in step 8.  In main.cf, change the mysql: prefix to pgsql: where they name their map files, and rename the files to match.  Everything else there is the same.

  #Virtual domains, users, and aliases
  virtual_mailbox_domains = pgsql:/etc/postfix/pgsql-virtual-mailbox-domains.cf
  virtual_mailbox_maps = pgsql:/etc/postfix/pgsql-virtual-mailbox-maps.cf
  virtual_alias_maps = pgsql:/etc/postfix/pgsql-virtual-alias-maps.cf

Watch for these filenames further down, as you’ll have to remember the change.  The actual content of the files, including the queries themselves, follows the same pattern as for MySQL from step 11 on.
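As an example, pgsql-virtual-mailbox-domains.cf comes out looking roughly like this with the names used in this article (user, password, and dbname are from our setup above; the query is adapted from the Linode guide).  One caveat: the table definitions above use domain_name where the Linode schema uses name, so double-check the column names in your queries:

user = mailuser
password = supersecretPaSSvvord
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE domain_name='%s'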

The Linode document has a great line to test your DB connection in steps 15 and on; it should print 1 if the domain is in your table.  Issue this from the shell:

postmap -q example.com pgsql:/etc/postfix/pgsql-virtual-mailbox-domains.cf

Dovecot

Keep in mind when working with the Dovecot configuration that if you don’t intend to use POP3, you shouldn’t enable it.

In step 25, specify pgsql instead of mysql in the driver line.

In step 27, note that your default_pass_scheme in the dovecot-sql.conf.ext file is SHA512 (the scheme used by the doveadm command above from the DigitalOcean document), not SHA512-CRYPT as in the Linode document.
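Put together, the relevant lines of dovecot-sql.conf.ext come out something like this (a sketch based on the Linode guide’s password_query, adapted to our names; adjust to taste):

driver = pgsql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=supersecretPaSSvvord
default_pass_scheme = SHA512
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';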

Done!

That’s it! Everything else should be the same.  Follow the Linode guide to test your connections, make sure your firewall is open, and if you have any problems they have an excellent troubleshooting guide linked from the original article.  Enjoy!

Get Top URLs in SGUIL (with SecurityOnion Example)

Here’s a quick way to get a list of the top URLs seen on a system that is monitoring traffic with SGUIL.  Log in to the MySQL database and query the “event” table for signatures that contain “URL”.  The example below is for SecurityOnion.

# In mysql, gets the top urls

# -A turns off "reading table information for completion of table and column names" for faster DB selection

mysql -uroot -A

use securityonion_db;

# Change date in the WHERE clause and number in LIMIT
# This query below retrieves the top 100 URLS after the date specified
SELECT event.signature, COUNT(*)
FROM event
WHERE event.timestamp > '2013-03-01'
  AND event.signature LIKE '%URL%'
GROUP BY event.signature
ORDER BY COUNT(*) DESC
LIMIT 100;
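If you’d rather skip the interactive client, the same query can be run in one shot from the shell (a sketch; watch the quoting):

mysql -uroot -A securityonion_db -e "SELECT event.signature, COUNT(*) FROM event WHERE event.timestamp > '2013-03-01' AND event.signature LIKE '%URL%' GROUP BY event.signature ORDER BY COUNT(*) DESC LIMIT 100;"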

This is a good quick way to find out what people are requesting.  However, because so many CDNs, ad servers and trackers use multiple hostnames, you don’t get the big picture from the domain alone.  It’s still great if you are gathering statistics to help you tune your ad blocker or web proxy cache.

Search All Zimbra Mailboxes (Community Edition)

Recently a phishing email got past our spam filter and we wanted to determine the extent to which users would be impacted.  Zimbra’s admin interface in the Community Edition doesn’t have the ability to search through all emails in a convenient way, so we started scouring the web for solutions.  That’s when I came across this post at the Zimbra forum that contained this code:

zmprov gaa |awk '{print "zmmailbox -z -m "$1" search <SEARCHSTRING>' |sh -v

However, this didn’t work for us, and you can see why in the code: there are some quotes and curly braces missing.  Here’s what worked for us on Zimbra Community Edition installed on Ubuntu Server.

Log in and su to the zimbra user.  If you remain root, the paths to the binaries will probably be wrong and will need to be specified as absolute paths.

In the following command, change SEARCH STRING to the text you want to find.

zmprov -l gaa |awk '{print "zmmailbox -z -m "$1" search \"SEARCH STRING\" "}' |sh -v

zmprov retrieves a list of all user mailboxes on your system and pipes it into awk, which builds a zmmailbox command to search each of those mailboxes for the specified text, then pipes that into sh (shell), which executes the formatted commands.
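In other words, for each mailbox address returned by zmprov, awk emits a line like the following, which sh then runs:

zmmailbox -z -m user@example.com search "SEARCH STRING"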

The only problem with this command is that it prints the command, along with a line indicating the number of results, for every user.  So even if a user has no matches, you still get a line printed; if they do have matches, a line indicating which email contained them is printed.  This could be improved by returning only pertinent results.  If you are executing this at the command line, make sure your scrollback buffer is large enough to hold the results for all of your users.

With approximately 1100 active accounts and 120GB of mailbox data, this command took about 3 hours to execute under normal daily load. The command itself did not appear to increase system load appreciably during execution.

Security Onion: Delete or Reset Snort and Trisul Data Directories

We are building a portable IDS that we take from location to location to assess different legs of the network.  The concept was to build the box, test it in the office, configure and apply upgrades, take an image of it in case we needed to restore it, and then send it out into the wild.  The problem was that we couldn’t image it easily because of the amount of data we had accrued during testing in the office environment.  So we needed a way to reset the box and remove all sensor data.  We were primarily using Snort and Trisul Network Analytics on Security Onion.

Trisul

For Trisul this is very simple.  Check the size of your Trisul data directory first:

du -hs /nsm/trisul_data

Then you can reset it with:

cd /usr/local/share/trisul
./cleanenv -f -saveinit

You may need to supply additional arguments, particularly if you are working with contexts.  The “-saveinit” argument is important if you have made any configuration changes, but it also preserves your interfaces and home networks, so whether or not you include it is up to you, particularly on a portable box.  See Trisul’s documentation for more info on the cleanenv script.

Snort

As for Snort, Security Onion keeps disk use below 90% with an hourly cron job, but if you need to delete all data right now, so you can change networks or image a disk, that cron won’t help.  I’ve run its contents manually using:

/usr/local/sbin/nsm_sensor_clean --force-yes

That recovered some disk space, but not enough for our purposes.

According to the Security Onion FAQ, pcaps are stored in /nsm/sensor_data/NAME_OF_SENSOR/dailylogs/ and you can verify their disk usage with du -hs; ours was 293G.  You can delete these files by replacing NAME_OF_SENSOR with your sensor name and issuing the following command as root:

rm -rf /nsm/sensor_data/NAME_OF_SENSOR/dailylogs/*

MySQL

Now for something a little more tricky.  For whatever reason, Security Onion (or Snort) stores data for each day and interface in its own set of tables.  This is a pain to clean by hand, so don’t try.  First find out whether this is an issue by logging in to MySQL from the shell using:

mysql -uroot

There is no root password.  You can find out the disk size of each of your MySQL databases using this:

SELECT table_schema,
       count(*) tables,
       concat(round(sum(table_rows)/1000000,2),'M') rows,
       concat(round(sum(data_length)/(1024*1024*1024),2),'G') data,
       concat(round(sum(index_length)/(1024*1024*1024),2),'G') idx,
       concat(round(sum(data_length+index_length)/(1024*1024*1024),2),'G') total_size,
       round(sum(index_length)/sum(data_length),2) idxfrac
FROM information_schema.TABLES
GROUP BY table_schema;

I can see that my big databases are snorby and securityonion_db.  You can find out which tables are the big ones, if you like, using this:

SELECT table_name,
           round(((data_length + index_length) / (1024*1024)),2) as "MBytes"
FROM information_schema.tables
WHERE table_schema = "securityonion_db";

That will give you some table sizes so we know the effect of our commands.  You can also check the entire directory like so:

du -hs /var/lib/mysql

Mine was 1.4G, which is not acceptable for taking an image.  Now you can purge the sguil data from Security Onion using the provided script at /usr/local/bin/sguil-db-purge.  My suggestion is to copy this file into your home directory and call it just “db-purge” so you don’t get confused.
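For example, copying it into your home directory as suggested:

cp /usr/local/bin/sguil-db-purge ~/db-purge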

In order for this to work, you must find the lines:

DAYSTOKEEP=365
source /etc/nsm/securityonion.conf

…and change them to:

source /etc/nsm/securityonion.conf
DAYSTOKEEP=0

If you don’t change the order, sourcing securityonion.conf will override the DAYSTOKEEP variable.  Changing it to 0 will delete all the sguil archives.  Now run the script:

/usr/local/bin/sguil-db-purge

That got my /var/lib/mysql down to 678M, but there was still more work to do.  There’s still the matter of Snorby, though for me it made up only about 250MB, and I wasn’t able to get the reset to work, so I left it.  The suggestion on the mailing list was to run the following command, but it just caused an error for me:

bundle exec rake snorby:hard_reset

That’s good enough for me.  You could keep going if you want by moving into the /nsm directory and deleting logs for things like httpry.  I’m prepared to move along at this point with a mostly fresh install, already configured and ready for imaging.

Replacing Sendmail with Postfix on Ubuntu causes error postdrop: warning: unable to look up public/pickup: No such file or directory

On one of our machines the original sysadmin had set up Sendmail, but on all our other machines the default installed by Ubuntu is Postfix.  Rather than maintaining and troubleshooting several types of systems, I wanted the Sendmail machine changed to match our Postfix systems.

The advice at this forum post was to simply apt-get install postfix, which would automatically remove Sendmail.  Which it did.  The Postfix configuration screen came up and I set it up as per usual.  However, when testing it from the command line by trying to send a logwatch report, I got this error:

postdrop: warning: unable to look up public/pickup: No such file or directory

Thanks to this article at databasically.com I found out that Ubuntu wasn’t removing Sendmail completely; in fact, it wasn’t even stopping the Sendmail process!  Here’s the solution that was posted:

# recreate the FIFO that Postfix's pickup service expects
mkfifo /var/spool/postfix/public/pickup
# find the leftover sendmail process
ps aux | grep mail
# kill it, substituting the process number from the ps output
kill [insert process number]
sudo /etc/init.d/postfix restart
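Since the root cause was Ubuntu not removing Sendmail completely, it may also be worth purging the leftover packages.  On a stock install the package names should be along these lines (verify with dpkg -l before purging):

sudo apt-get remove --purge sendmail sendmail-bin sendmail-base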

PHP APC config syntax causes [apc-error] apc_mmap: mmap failed: Invalid argument

After upgrading Ubuntu Server from 9.10 to 10.04 LTS, PHP’s APC cache stopped functioning.  Apache wouldn’t start; it hung in the process list and printed this error to /var/log/apache2/error.log:

[apc-error] apc_mmap: mmap failed: Invalid argument

The Apache process would show up in the process list like this:

apc@hostname# ps aux | grep apache
www-data 6958 104 0.0 139044 3624 ? R 12:47 0:19 /usr/sbin/apache2 -k start

This process would then have to be killed, APC commented out, and then the web server restarted just to continue on without APC until a solution was found.

The PHP manual states this regarding MMAP support in APC:

http://php.net/manual/en/apc.configuration.php

When APC is compiled with mmap support (Memory Mapping), it will use only one memory segment, unlike when APC is built with SHM (SysV Shared Memory) support that uses multiple memory segments. MMAP does not have a maximum limit like SHM does in /proc/sys/kernel/shmmax. In general MMAP support is recommended because it will reclaim the memory faster when the webserver is restarted and all in all reduces memory allocation impact at startup.

APC was made to run by commenting out all lines from the PHP config file except for:

extension=apc.so
apc.enabled = 1

This config can exist in a number of places. In 9.10, APC had been compiled via PECL, so ours was in /etc/php5/apache2/php.ini. However, in 10.04 APC is a package, so we removed the PECL version, installed the packaged one using apt-get install php-apc, and moved the configuration to /etc/php5/conf.d/apc.ini for better consistency.

pear uninstall apc
apt-get install php-apc

As I began to uncomment lines one by one, it turned out the culprit was the apc.shm_size directive. The default size is 30M, but as soon as the directive was uncommented it crashed Apache. I was unable to specify any value at all, even the same or a lesser value, with or without quotes. That’s when I started thinking syntax might be the problem, because it works with the default value (shm_size commented out) but fails with an “invalid argument” error. That suggested APC was passing an invalid argument to mmap, and I found this post confirming my suspicion:

http://stackoverflow.com/questions/6716929/apc-configuration-on-ubuntu-10-4-problem-with-apc-shm-size-apc-shm-segments-an

It turns out that the “M” for megabytes cannot be specified in the shm_size directive for APC on Ubuntu Server 10.04, because it ships APC version 3.1.3p1. On 9.10, APC wasn’t included as a package, so it was installed with PECL, which provided a more recent version of APC (3.1.9) that did allow specifying the “M” in the shm_size directive.

If you wish this to work in your config file, it should read like this in older versions of APC:

apc.shm_size = 100

This specifies a 100MB shared memory segment, and is equivalent to this in newer versions:

apc.shm_size = 100M

And you can also put quotes around the “100M” if you like.

After these changes I had Apache up and running again, the APC cache helping PHP along, and some of the quickest loading pages I’ve seen in a while.

Mounting NFS causes error “mount: wrong fs type, bad option, bad superblock on …”

Some days the obvious just passes you by.

On a fresh install of Ubuntu Server I found that an NFS export from our NAS wouldn’t mount. I had the correct options in /etc/fstab on this machine and in /etc/exports on the NFS server, as evidenced by the other machines successfully connected to it. Mounting it on this server gave the following error:

root@iknow:/var/log# mount /mnt/backup
mount: wrong fs type, bad option, bad superblock on 10.20.30.40:/mnt/my_backup,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount. helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
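For reference, the /etc/fstab entry on the client looked something like this (paths taken from the error above; your mount options will differ):

10.20.30.40:/mnt/my_backup   /mnt/backup   nfs   defaults   0   0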

After double checking everything several times the light bulb finally came on. nfs-common wasn’t installed!

sudo apt-get install nfs-common

It installed, I retried the mount, and it succeeded. Just remember that as a client, you have to have nfs-common installed if you want to mount an NFS export from the server.

MySQL ERROR 1396 (HY000): Operation CREATE USER failed

While creating a new user in MySQL today, I accidentally hit the return key in the middle of entering a password via the command line client, which looked like this:


mysql> CREATE USER 'userguy'@'10.20.30.40' IDENTIFIED BY '
'> ;
'> '''';;
'> ';
Query OK, 0 rows affected (0.00 sec)

Which isn’t what I wanted. Terminating the command didn’t work; it succeeded anyway. So I had to fix it.

I issued a

DROP USER

command and was greeted with:

mysql> drop user userguy;
ERROR 1396 (HY000): Operation DROP USER failed for 'userguy'@'%'

That didn’t work either. Instead I did this:

delete from mysql.user where User = 'userguy';
delete from mysql.db where User = 'userguy';

And when I tried to create the user again I was still met with ERROR 1396.

How to solve MySQL ERROR 1396 (HY000): Operation CREATE USER failed

Once those user entries are removed from the User and db tables, you just have to flush privileges:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Now when you enter your CREATE USER command (with the correct syntax and a valid single line password), it should succeed.

And stop working when you are tired!

Creating MySQL Users and Granting Permissions

When should you create a new user in MySQL? That’s a question that you have to ask whenever you add a new application or restructure your data. I’ve been in places where every application connects as root, and obviously that’s very bad.

Only you know your situation. When I build web applications, sometimes I’ll have a shared database, such as geographic information or lists of musical artists, that I want multiple applications to have access to. But only read access; they can’t change it. So root is out. But each application also has its own database (in MySQL) or schema (in PostgreSQL) in which it will need to INSERT, UPDATE, DELETE and SELECT from tables. It may even need to CREATE tables, such as temp tables, depending on the situation. In these cases, such an application requires a dedicated user, because there are no commonalities that would allow the user account and its privileges to be reused by another application.

In this guide I’ll only deal with command line actions from the MySQL client, not cPanel actions, but you’ll be able to transfer the concepts to a different interface.

To create users you’ll most likely have to be root:

mysql -u root -p

Create the new user for this application:

CREATE USER 'username'@'hostname' IDENTIFIED BY 'password';

You’ll have to think about this

hostname

setting. If the only one accessing your database with this username is your application, and your MySQL process is on your local machine, then ‘localhost’ will do as a hostname. If you have a separate machine running MySQL, as I do, then you’ll need to enter the hostname or IP address of your web server machine, as that is where the application will connect from.

CREATE USER 'username'@'webserver' IDENTIFIED BY 'password';

It is possible to use wildcards for the hostname, although you should think this through as generally it is a bad idea. If you are on an internal network and you have your web server cluster and SQL machine on the same subnet, you could do this:

CREATE USER 'username'@'10.20.30.%' IDENTIFIED BY 'password';

This allows any of the 256 addresses on that subnet to connect to your MySQL server as that user. Think about the future use of your network before you do this.

GRANTing PRIVILEGES

Now you have a user who can’t do much. You’ll need to give them access to something. Since the user I’m creating will be the one populating the database with information and not just reading it, they will need greater privileges than the average web application that just reads and reports. The GRANT syntax can be found in the MySQL manual. My specific requirement allowed me to enter:

mysql> GRANT ALTER, CREATE TEMPORARY TABLES, DELETE, INSERT, LOCK TABLES, SELECT, UPDATE ON dbname.* TO 'userguy'@'10.20.30.40';

  • The GRANT is followed by a list of actions the user can take, as seen in the manual page quoted above (SELECT, DELETE, etc).
  • ON a certain database and tables – dbname.* means these privileges apply to ALL tables in the database called dbname.
  • TO username @ IP address – meaning if this user connects from another address they won’t receive these privileges.
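For the read-only shared database scenario mentioned earlier, the grant would be just SELECT, along these lines (shared_data is a hypothetical database name):

mysql> GRANT SELECT ON shared_data.* TO 'username'@'webserver';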

DON’T enter:
mysql> GRANT ALL ON *.* TO 'userguy'@'%';

That is the equivalent of giving them root privileges on all tables of all databases when connecting from anywhere. It’s pointless to try to manage users this way. And never grant *.* to a user unless you are aware of what all the * privileges include. Most people would be mortified to find out they used *.* and granted their web user the ability to DROP all their databases. Of course, they don’t find out until after it has happened, because they didn’t know the user had that ability in the first place.

Now, just to be clear everything is in effect, issue the

FLUSH PRIVILEGES

command.

Keep in mind that in newer versions, MySQL will create a user when a GRANT is issued if the user doesn’t already exist. However, I prefer to explicitly CREATE a user and keep track of it in a file.

Connecting remotely requires configuration settings changes. If you are not connecting locally, you’ll need to modify your MySQL’s my.cnf file to allow remote connections. This is part of your extended configuration and security, and I won’t discuss it in this article.

Now you can go about creating users and granting privileges with a total security plan in mind. Everyone can access only what they need, no one can access or change what they shouldn’t, and users can only connect from specific locations.

Forcing Redirect to a www URL with Apache Rewrite for SEO

Redirecting requests for the non www version of your domain to the www version only can help to boost your SEO, eliminate duplicate search engine entries and improve your ranking. And it is easy to do with Apache and mod_rewrite.

I registered a new domain about a week ago and then posted my new social-networking-style project to go live. I was still fiddling with server upgrades, so I didn’t tell anybody about it. Sure enough, Google came crawling and took all the pages. Excellent.

I went to Google and performed a:


site:example.com

…and it returned all my pages. Excellent again: fully indexed in less than a week (though not yet showing in search results). Wait a minute. I forgot the www. I tried again with:


site:www.example.com

And Google returned one page: the doorway page for my web host! Sometime after I registered the domain and pointed the DNS at my machine, Google crawled the www version of the URL. I don’t know how or where it got the link; probably from a list of recently registered domains. Then, after that week was up and I had posted the full site, Google crawled the non-www version of the URL and indexed both.

Anyone see a problem there? First, I’ve got a doorway page for my host showing up as the www version of my URL. Second, if Google crawls both versions on its own, without any links on the net, sooner or later it is going to see duplicate content, since both versions will eventually be the same (I hope). Third, it is crawling the two versions at different rates, giving out-of-date results for one of them. Last, it is splitting up the results for the domain, and I don’t see that being good.

A solution that has been proposed by others in the past, and one that I would recommend, is to redirect all traffic from non-www requests to the www version of your domain, if that is the version you are using. If you are using Apache and have mod_rewrite, you can do the following:


<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\.example\.com
RewriteRule ^/(.*) http://www.example.com/$1 [R=301,L]
</IfModule>

I did this in my http-vhosts.conf file, but I believe it should work equally well in .htaccess. See the note at the bottom of the page for the differences.

The RewriteCond line says: for every request where the HTTP_HOST DOES NOT EQUAL www.example.com, redirect it to www.example.com. This won’t work if you use subdomains like subdomain.example.com, because it will redirect those requests to www.example.com as well.

Very important, don’t forget the R=301 which is a seamless HTTP redirect using the status code 301. According to W3.org:

301 Moved Permanently

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs.

So Google SHOULD remove old references to the non-www domain and all your future results should look the way you like.
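You can check that the redirect is behaving from the command line with curl (hostnames from the example above); the response should be a 301 with a Location header pointing at the www version:

curl -I http://example.com/somepage

HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/somepage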

Proponents of SEO will tell you this automatic redirect to the www version of your domain using Apache and mod_rewrite is good for ranking, elimination of duplicate entries and boosting your visibility. Give it a try if you haven’t yet and see how it works for you.

UPDATE: If you put this in your root Apache configs instead of .htaccess, make sure you write ^/(.*) instead of (.*), or you may get a double slash between your domain name and pages, which Google will also consider duplicate content.