DW Questions Answers WordPress Plugin and Multisite

Although the DWQA Questions and Answers plugin for WordPress has been updated to work with Multisite, it probably won't work as expected. At the time of this writing, the problem appears to be new user creation and Multisite's default behavior. To use DWQA's default answering model, in which anonymous users can ask questions but only registered users can post answers, you have to allow those users to register. First, on my Multisite subsite I had no option to allow user registration. Even after enabling it on the main blog as network administrator, the option still was not available under the subsite's settings.

Other users have reported that once you are able to allow registration on a subsite, clicking the registration link redirects the user to the network's main site and registers them there, which is probably not the desired behavior for a DW Questions and Answers site. It also allows that user to log in to every other site on the network.

What would be desirable here is either a separate login that is not tied to the WordPress registered-users login, or support for social APIs so users could use an existing social account to log in for posting and answering questions.

WordPress Automatic Downloads and Plugin Updates Permissions

Going crazy following posts on the Internet trying to set permissions on your WordPress directory so that automatic plugin downloads and updates work? So was I, several times on several sites. With security in mind, I didn't want to grant more than necessary. After applying the 'direct' method in wp-config.php, this is what I had to do to get plugins to download, create their directory, and install. Keep in mind all other permissions were left at the defaults:

chmod 775 wp-content/
chgrp nobody wp-content/
chgrp -R nobody wp-content/plugins/

Here 'nobody' should be whatever user the Apache process runs as (on Debian/Ubuntu this is typically www-data). You do not need to set permissions to 777.
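
For reference, the 'direct' filesystem method mentioned above is set with a single constant in wp-config.php; this is the standard WordPress FS_METHOD setting, and nothing else in wp-config.php needs to change:

/* Tell WordPress to write to the filesystem directly instead of asking for FTP credentials */
define('FS_METHOD', 'direct');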

Postfix, Dovecot and PostgreSQL Setup

While setting up a simple mail server on Ubuntu for a couple of small virtual domains with limited users, I came across a couple of excellent articles, but they were written for MySQL. I have a machine with limited resources running custom software on PostgreSQL. Although flat files would have been acceptable, I didn't find the same breadth of documentation for them. So I decided to take the excellent articles I found and adapt them to Postgres for my server.

The best guide I found was from Linode, geared toward helping customers set up email with Postfix, Dovecot, and MySQL on their excellent platform. The second guide I found left a lot to be desired, but since it was Postgres-specific I was able to take those pieces and fold them into the Linode guide. That guide was called How to setup a postfix email server with Dovecot and was published as a community tutorial on DigitalOcean. While DigitalOcean and Linode are close competitors, Linode wins out for having the better mail server how-to.

I'm mainly going to show the changes needed to make the Linode guide work with Postgres; otherwise, use that guide for the complete instructions. I'll assume you already have a Postgres server up and running on the machine.

Postgres – Creating the Database

Your first change comes under the "Creating the Database" heading. Let's start by using the DigitalOcean guide to create the user, while keeping the Linode naming convention as much as possible.

First, follow the adjustments to the /etc/postgresql/<version>/main/pg_ident.conf file:

mailmap         dovecot                 mailuser
mailmap         postfix                 mailuser
mailmap         root                    mailuser
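
For the map to take effect, the pg_hba.conf file in the same directory also needs an entry that references it. A minimal sketch, assuming local peer authentication (adjust the database, user, and method to your Postgres version and access policy):

# /etc/postgresql/<version>/main/pg_hba.conf
local   mailserver      mailuser        peer map=mailmap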

Reload Postgres and set up the user.

# sudo -u postgres psql
postgres=# CREATE USER mailuser;
postgres=# REVOKE CREATE ON SCHEMA public FROM PUBLIC;
postgres=# REVOKE USAGE ON SCHEMA public FROM PUBLIC;
postgres=# GRANT CREATE ON SCHEMA public TO postgres;
postgres=# GRANT USAGE ON SCHEMA public TO postgres;
postgres=# CREATE DATABASE mailserver WITH OWNER mailuser;
postgres=# \q 
# sudo psql -U mailuser -d mailserver
mailserver=>
CREATE SEQUENCE seq_mail_domain_id START 1;
CREATE SEQUENCE seq_mail_user_id START 1;
CREATE SEQUENCE seq_mail_alias_id START 1;

CREATE TABLE virtual_domains (
  domain_id INT2 NOT NULL DEFAULT nextval('seq_mail_domain_id'),
  domain_name varchar(50) NOT NULL,
  PRIMARY KEY (domain_id)
);


CREATE TABLE virtual_users (
  user_id INT2 NOT NULL DEFAULT nextval('seq_mail_user_id'),
  domain_id INT2 NOT NULL,
  password varchar(106) NOT NULL,
  email varchar(100) NOT NULL,
  PRIMARY KEY (user_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

CREATE TABLE virtual_aliases (
  alias_id INT2 NOT NULL DEFAULT nextval('seq_mail_alias_id'),
  domain_id INT2 NOT NULL,
  source varchar(100) NOT NULL,
  destination varchar(100) NOT NULL,
  PRIMARY KEY (alias_id),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(domain_id) ON DELETE CASCADE
);

To add your domains to the database:

INSERT INTO virtual_domains (domain_name)
VALUES
  ('example.com'),
  ('hostname.example.com'),
  ('hostname'),
  ('localhost.example.com');

Do a simple select from the virtual_domains table and note the id of the domain you wish to add addresses for:

SELECT * FROM virtual_domains;

According to the Postgres-specific guide referenced earlier, you generate the password hash from the command line like so:

# doveadm pw -s sha512 -r 100
Enter new password: ...
Retype new password: ...
{SHA512}.............................................................==

Now you need to add some email addresses like so:

INSERT INTO virtual_users
  ( domain_id, password , email)
VALUES
  ( '1', '{SHA512}.............................................................==', 'postmaster@example.com'),
  ( '1', '{SHA512}.............................................................==', 'supah_user@example.com');

Now, if you created your user according to the DigitalOcean document, it has no password. You'll have to add one for your user like this:

ALTER ROLE mailuser WITH PASSWORD 'supersecretPaSSvvord';

That should be it for your direct Postgres editing. You can follow all the testing they show for MySQL since it looks like standard SQL.

Postfix

You can follow the Linode guide for Postfix, only needing to make changes in step 8. In main.cf, change the mysql: prefix on the map files to pgsql: and rename the files to match. Everything else there is the same.

  #Virtual domains, users, and aliases
  virtual_mailbox_domains = pgsql:/etc/postfix/pgsql-virtual-mailbox-domains.cf
  virtual_mailbox_maps = pgsql:/etc/postfix/pgsql-virtual-mailbox-maps.cf
  virtual_alias_maps = pgsql:/etc/postfix/pgsql-virtual-alias-maps.cf

Watch for these filenames further down, as you'll have to remember the change. The actual content of the files, including the queries themselves from step 11 onward, is the same as for MySQL.
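
For illustration, a converted domains map file might look roughly like this; the host, password, database, and column name come from the schema created above, so adjust them to your own values (in particular, make sure the query references domain_name if that is what you called the column):

# /etc/postfix/pgsql-virtual-mailbox-domains.cf (illustrative sketch)
hosts = 127.0.0.1
user = mailuser
password = supersecretPaSSvvord
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE domain_name='%s'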

The Linode document has a great one-liner to test your DB connection in step 15 and on. Issue this from the shell:

postmap -q example.com pgsql:/etc/postfix/pgsql-virtual-mailbox-domains.cf
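
If you want to sanity-check the other two map files the same way, postmap can query them too, using one of the addresses you inserted above (and a hypothetical alias for the alias map):

postmap -q postmaster@example.com pgsql:/etc/postfix/pgsql-virtual-mailbox-maps.cf
postmap -q alias@example.com pgsql:/etc/postfix/pgsql-virtual-alias-maps.cf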

Dovecot

Keep in mind when working with the Dovecot configuration that if you don't intend to use POP3, you shouldn't enable it.

In step 25 specify pgsql instead of mysql in the driver line.

In step 27, note that in the dovecot-sql.conf.ext file your default_pass_scheme, when using the doveadm method above from the DigitalOcean document, is SHA512, not SHA512-CRYPT as in the Linode document.
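
As a sketch, the relevant lines of dovecot-sql.conf.ext adapted for Postgres might look like this; the connection string and password are placeholders for your own values, and the query matches the virtual_users table created earlier:

driver = pgsql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=supersecretPaSSvvord
default_pass_scheme = SHA512
password_query = SELECT email AS user, password FROM virtual_users WHERE email='%u';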

Done!

That’s it! Everything else should be the same.  Follow the Linode guide to test your connections, make sure your firewall is open, and if you have any problems they have an excellent troubleshooting guide linked from the original article.  Enjoy!

Find Conficker Infected Machines with SGUIL

This command line query for MySQL will grab a list of Conficker-infected machines for a given date range, their IP addresses, and the count of events in the logs for each machine, sorted with the biggest offenders first. The distribution is Security Onion Linux.

# -A turns off "reading table information for completion of table and column names" for faster DB selection
 
mysql -uroot -A
 
use securityonion_db;

# change date range as needed
SELECT
  INET_NTOA(event.src_ip),
  count(INET_NTOA(event.src_ip)) AS total
FROM event IGNORE INDEX (event_p_key, sid_time)
WHERE event.timestamp > '2013-04-15' AND event.timestamp < '2013-04-16'
AND event.signature LIKE '%Conficker%'
GROUP BY INET_NTOA(event.src_ip)
ORDER BY total DESC;

I was unable to get a list of the machines' host names, which would have been nice when you have a large list of DHCP clients and aren't looking at the query results until many days later.
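
As a rough workaround, and purely a sketch, you can reverse-resolve the offending source IPs from the shell afterward; this only helps if your DNS has PTR records for the DHCP clients:

# List distinct Conficker source IPs and attempt a reverse lookup on each
mysql -uroot -N -e "SELECT DISTINCT INET_NTOA(src_ip) FROM event WHERE signature LIKE '%Conficker%'" securityonion_db |
while read ip; do
  echo -n "$ip "
  host "$ip" | tail -1
done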

Another important query for security purposes is to obtain a list of the IP addresses the Conficker-infected machines on your network are trying to contact. In this case I'm going to leave out the date range condition, since I'm looking for the IPs that have had the most activity of all time so I can ban them. Network wide. For fun. Just 'cause I can.

SELECT
  INET_NTOA(event.dst_ip),
  count(INET_NTOA(event.dst_ip)) AS total
FROM event IGNORE INDEX (event_p_key, sid_time)
WHERE event.signature LIKE '%Conficker%'
GROUP BY INET_NTOA(event.dst_ip)
ORDER BY total DESC;

Get Top URLs in SGUIL (with SecurityOnion Example)

Here's a quick way to get a list of the top URLs seen on a system that is monitoring traffic with SGUIL. Log in to the MySQL database and query the "event" table for signatures that contain "URL". The example below is for Security Onion.

# In mysql, gets the top urls

# -A turns off "reading table information for completion of table and column names" for faster DB selection

mysql -uroot -A

use securityonion_db;

# Change date in the WHERE clause and number in LIMIT
# This query below retrieves the top 100 URLS after the date specified
SELECT event.signature,
  count(*)
FROM event
WHERE event.timestamp > '2013-03-01'
AND event.signature LIKE '%URL%'
GROUP BY event.signature
ORDER BY count(*) DESC
LIMIT 100;

This is a good, quick way to find out what people are requesting. However, because so many CDNs, ad servers, and trackers use multiple hostnames, you don't get the big picture of what is coming from the domain itself. It is still great if you are gathering statistics to help you tune your ad blocker or web proxy cache.

Search All Zimbra Mailboxes (Community Edition)

Recently a phishing email got past our spam filter and we wanted to determine the extent to which users would be impacted.  Zimbra’s admin interface in the Community Edition doesn’t have the ability to search through all emails in a convenient way, so we started scouring the web for solutions.  That’s when I came across this post at the Zimbra forum that contained this code:

zmprov gaa |awk '{print "zmmailbox -z -m "$1" search <SEARCHSTRING>' |sh -v

However, this didn't work for us, and you can see why in the code: some quotes and a closing curly brace are missing. Here's what worked for us on Zimbra Community Edition installed on Ubuntu Server.

Log in and su to the zimbra user. If you remain root, the paths to the binaries will probably be wrong and will need to be specified as absolute paths.

In the following command, change SEARCH STRING to the text you want to search for.

zmprov -l gaa |awk '{print "zmmailbox -z -m "$1" search \"SEARCH STRING\" "}' |sh -v

Zmprov retrieves a list of all user mailboxes on the system and pipes it into awk. Awk builds a zmmailbox command that searches each returned mailbox for the specified text, and the result is piped into sh, which executes each formatted command.
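
If the one-liner feels opaque, the same operation can be written as an explicit loop; this is purely a readability variation using the same commands, with the account name echoed before each search so any hits are easy to attribute:

for acct in $(zmprov -l gaa); do
  echo "== $acct =="
  zmmailbox -z -m "$acct" search "SEARCH STRING"
done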

The only problem with this command is that it prints the command, along with a line indicating the number of results returned, for every user. So even if a user had no matches, you still get a line of output; if they had matches, a line identifying the email that contained them is printed. This could be improved by returning only the pertinent results. If you are executing this at the command line, make sure your scrollback buffer is large enough to hold the results for the number of users you have.

With approximately 1100 active accounts and 120GB of mailbox data, this command took about 3 hours to execute under normal daily load. The command itself did not appear to increase system load appreciably during execution.

Security Onion: Delete or Reset Snort and Trisul Data Directories

We are building a portable IDS that we take from location to location to assess different legs of the network. The concept was to build the box, test it in the office, configure it and apply upgrades, take an image of it in case we needed to restore it, and then send it out into the wild. The problem was that we couldn't image it easily because of the amount of data accrued during testing in the office environment. So we needed a way to reset the box and remove all sensor data. We were primarily using Snort and Trisul network analytics on Security Onion.

Trisul

For Trisul this is very simple. Check the size of your Trisul data directory first:

du -hs /nsm/trisul_data

Then you can reset it with:

cd /usr/local/share/trisul
./cleanenv -f -saveinit

You may need to supply additional arguments, particularly if you are working with contexts. The "-saveinit" argument preserves any changes you have defined, including your interfaces and home networks, so whether to include it is up to you, particularly on a portable box. See the Trisul documentation for more information on the cleanenv script.

Snort

As for Snort, Security Onion keeps disk use below 90% with an hourly cron job, but if you need to delete all the data right now, so you can change networks or image a disk, that cron job won't help. I've run its cleanup command manually using:

/usr/local/sbin/nsm_sensor_clean --force-yes

That would recover some disk space, but not enough for our purposes.

According to the Security Onion FAQ, pcaps are stored in /nsm/sensor_data/NAME_OF_SENSOR/dailylogs/ and you can verify their disk usage with du -hs.  Ours was 293G.  You can delete these files by replacing NAME_OF_SENSOR with your sensor name and issuing the following command as root:

rm -rf /nsm/sensor_data/NAME_OF_SENSOR/dailylogs/*

MySQL

This part is a little more tricky. For whatever reason, Security Onion or Snort stores data for each day and interface in its own set of tables. This is a pain to clean by hand, so don't try. First find out if this is an issue by logging in to MySQL from the shell using:

mysql -uroot

There is no root password.  You can find out the disk size of each of your MySQL databases using this:

SELECT
  table_schema,
  count(*) AS tables,
  concat(round(sum(table_rows)/1000000,2),'M') AS rows,
  concat(round(sum(data_length)/(1024*1024*1024),2),'G') AS data,
  concat(round(sum(index_length)/(1024*1024*1024),2),'G') AS idx,
  concat(round(sum(data_length+index_length)/(1024*1024*1024),2),'G') AS total_size,
  round(sum(index_length)/sum(data_length),2) AS idxfrac
FROM information_schema.TABLES
GROUP BY table_schema;

I can see that my big databases are snorby and securityonion_db. You can find out which are the big tables if you like using this:

SELECT table_name,
           round(((data_length + index_length) / (1024*1024)),2) as "MBytes"
FROM information_schema.tables
WHERE table_schema = "securityonion_db";

That will give you the table sizes, so we know the effect of our commands. You can also check the entire directory like so:

du -hs /var/lib/mysql

Mine was 1.4G – not acceptable for taking an image. Now you can purge the sguil data from Security Onion using a provided script at /usr/local/bin/sguil-db-purge. My suggestion is to copy this file into your home directory and call it just "db-purge" so you don't get confused.

In order for this to work, you must find the lines:

DAYSTOKEEP=365
source /etc/nsm/securityonion.conf

….and change them to:

source /etc/nsm/securityonion.conf
DAYSTOKEEP=0

If you don't change the order, sourcing securityonion.conf will override the DAYSTOKEEP variable. Setting it to 0 deletes all the sguil archives. Now run the script:

/usr/local/bin/sguil-db-purge

That got my /var/lib/mysql down to 678M, but there was still more work to do. There's still the matter of Snorby. For me, however, it made up only about 250MB and I wasn't able to get the reset to work, so I left it. The suggestion on the mailing list was to run the following command, but it just produced an error for me:

bundle exec rake snorby:hard_reset

That's good enough for me. You could keep going if you want by moving into the /nsm directory and deleting logs for things like httpry. I'm prepared to move on at this point with a mostly fresh install, already configured and ready for imaging.

MySQL too many connections – solution

When greeted this morning with a MySQL error stating that a database connection could not be made from our PHP web application, I had to do some testing. First, try connecting to the database. I did this from a remote host and was thrown back an "Error: too many connections."

From there you should log in as root on the localhost and issue:

SHOW FULL PROCESSLIST;

This will give you a list of the connections. Take a good look at the big offenders, because you'll need to troubleshoot those applications later and find out why they hold so many connections. Now that you know the problem, restart MySQL; connections should resume as normal (with the excessive stale connections released) and you can go about the repair.
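
To see at a glance which users or hosts are holding the most connections, a quick aggregate over information_schema.PROCESSLIST can help (a minimal sketch, assuming MySQL 5.1 or later, run as root):

SELECT user, SUBSTRING_INDEX(host, ':', 1) AS client, COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY user, client
ORDER BY connections DESC;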

Also, in your my.cnf you can reduce wait_timeout, which limits how long an idle connection can be held open by an application. The default is 28800 seconds, or 8 hours. I reduced mine to 30 minutes. You could go lower, but we have some apps that do their own connection pooling and I didn't want to interfere with that. In my.cnf:

wait_timeout = 1800

If you are absolutely sure you need more connections, you can also raise max_connections from its default of 151 in my.cnf:

# default max_connections is 151 (1 spare for super user)
max_connections = 201

Now you need to keep an eye on the situation by issuing SHOW FULL PROCESSLIST once in a while to see whether connections are filling up fast. And don't forget to visit any applications that show up in there frequently. I found two applications that were using persistent connections, held open for two hours, that just didn't need to be.
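
A couple of standard MySQL status checks also make it easy to spot when you're creeping toward the limit again:

-- current number of open connections
SHOW STATUS LIKE 'Threads_connected';
-- high-water mark since the server last restarted
SHOW STATUS LIKE 'Max_used_connections';
-- the configured ceiling
SHOW VARIABLES LIKE 'max_connections';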

Replacing Sendmail with Postfix on Ubuntu causes error postdrop: warning: unable to look up public/pickup: No such file or directory

On one of our machines the original sysadmin had set up Sendmail, but on all our other machines the default MTA installed by Ubuntu is Postfix. Rather than maintaining and troubleshooting several types of systems, I wanted the Sendmail machine to be changed to match our Postfix systems.

The advice at this forum post was to simply apt-get install postfix and it would automatically remove Sendmail. Which it did. It drops you into the Postfix configuration screens, and I set it up as per usual. However, when testing from the command line by trying to send a logwatch report, I got this error:

postdrop: warning: unable to look up public/pickup: No such file or directory

Thanks to this article at databasically.com I found out that Ubuntu wasn't removing Sendmail completely; in fact, it wasn't even stopping the sendmail process! Here's the solution that was posted:

mkfifo /var/spool/postfix/public/pickup   # recreate the missing pickup FIFO
ps aux | grep mail                        # find the leftover sendmail process
kill [insert process number]              # use the PID from the previous command
sudo /etc/init.d/postfix restart

PHP APC config syntax causes [apc-error] apc_mmap: mmap failed: Invalid argument

After upgrading an Ubuntu server from 9.10 to 10.04 LTS, PHP's APC cache stopped functioning. Apache wouldn't start; it hung in the process list and printed this error to /var/log/apache2/error.log:

[apc-error] apc_mmap: mmap failed: Invalid argument

The Apache process would show up in the process list like this:

apc@hostname# ps aux | grep apache
www-data 6958 104 0.0 139044 3624 ? R 12:47 0:19 /usr/sbin/apache2 -k start

This process would then have to be killed, APC commented out, and then the web server restarted just to continue on without APC until a solution was found.

The PHP manual states this regarding MMAP support in APC:

http://php.net/manual/en/apc.configuration.php

When APC is compiled with mmap support (Memory Mapping), it will use only one memory segment, unlike when APC is built with SHM (SysV Shared Memory) support that uses multiple memory segments. MMAP does not have a maximum limit like SHM does in /proc/sys/kernel/shmmax. In general MMAP support is recommended because it will reclaim the memory faster when the webserver is restarted and all in all reduces memory allocation impact at startup.

APC was made to run by commenting out all lines from the PHP config file except for:

extension=apc.so
apc.enabled = 1

This config can live in a number of places. In 9.10, APC had been compiled via PECL, so it was in our /etc/php5/apache2/php.ini file. In 10.04, however, APC is available as a package, so we removed the PECL version, installed the packaged one with apt-get install php-apc, and moved the configuration to /etc/php5/conf.d/apc.ini for better consistency.

pear uninstall apc
apt-get install php-apc

As I began to uncomment lines one by one, it turned out the culprit was the apc.shm_size directive. The default size is 30M, but as soon as the directive was uncommented it crashed Apache. I was unable to specify any value at all, even the same or a smaller one, with or without quotes. That's when I started to suspect the syntax, because it works with the default value (shm_size commented out) but fails with an "invalid argument" error, which suggests APC is passing an invalid argument to mmap. The post below confirmed my suspicion.

http://stackoverflow.com/questions/6716929/apc-configuration-on-ubuntu-10-4-problem-with-apc-shm-size-apc-shm-segments-an

It turns out that the "M" for megabytes cannot be specified in the shm_size directive for APC on Ubuntu Server 10.04 because it ships APC version 3.1.3p1. On 9.10, APC wasn't included as a package, so it had been installed with PEAR/PECL, which provided a more recent version of APC (3.1.9) that did allow specifying the "M" in the shm_size directive.

If you wish this to work in your config file, it should read like this in older versions of APC:

apc.shm_size = 100

This specifies a 100MB shared memory segment, and is equivalent to this in newer versions:

apc.shm_size = 100M

You can also put quotes around the "100M" if you like.

After these changes I had Apache up and running again, the APC cache helping PHP along, and some of the quickest loading pages I’ve seen in a while.