Compiling PHP on Linux causes configure: error: Cannot find ldap.h

configure: error: Cannot find ldap.h

You’ve probably seen this if you’ve tried compiling PHP on Ubuntu, Red Hat, Gentoo, or many other Linux distributions. In my research it seemed like almost everyone had run into this problem at some point.

For me it began with this:

configure: error: Cannot find ldap libraries in /usr/include.

To remedy this I changed the LDAP option passed to configure (for 64-bit Ubuntu Server) to:

--with-ldap=/usr/lib/x86_64-linux-gnu

Which solved that problem but left me with:

configure: error: Cannot find ldap.h

After searching and searching I finally found some good soul who had the right combination of linking commands and solved it for me:

http://forum.directadmin.com/showthread.php?t=41922&p=211737#post211737

Although their links were for a different distribution, I adjusted them to work with Ubuntu Server 12.04 64-bit when compiling PHP 5.5:

# Link the LDAP libraries into a path the configure script actually searches
ln -s /usr/lib/x86_64-linux-gnu/libldap_r.so /usr/lib/libldap_r.so
ln -s /usr/lib/x86_64-linux-gnu/libldap.so /usr/lib/libldap.so

Once you’ve done this you no longer need the --with-ldap=/usr/lib/x86_64-linux-gnu option; in fact, configure will fail with it. But if you use just --with-ldap with no arguments, you may see this error:

/usr/bin/ld: ext/ldap/.libs/ldap.o: undefined reference to symbol 'ber_scanf@@OPENLDAP_2.4_2'
/usr/bin/ld: note: 'ber_scanf@@OPENLDAP_2.4_2' is defined in DSO /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 so try adding it to the linker command line
/usr/lib/x86_64-linux-gnu/liblber-2.4.so.2: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make: *** [sapi/cli/php] Error 1

I finally settled on --with-ldap=shared, which allowed the build to complete without error.
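
For reference, here’s a minimal sketch of the final configure invocation. The prefix is a placeholder and your build will have many more extension flags; only the LDAP piece comes from the steps above.

# Minimal sketch -- prefix and other flags are placeholders for your build;
# the libldap symlinks above must already be in place.
./configure \
  --prefix=/usr/local/php55 \
  --with-ldap=shared
make && make install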

Hope this helps someone else.

Find Conficker-Infected Machines with SGUIL

This command-line MySQL query will return a list of Conficker-infected machines for a given date range: each machine’s IP address and the count of logged events for it, sorted with the biggest offenders first. The distribution is Security Onion Linux.

# -A turns off "reading table information for completion of table and column names" for faster DB selection
 
mysql -uroot -A
 
use securityonion_db;

# change date range as needed
SELECT
  INET_NTOA(event.src_ip),
  count(INET_NTOA(event.src_ip)) AS total
FROM event IGNORE INDEX (event_p_key, sid_time)
WHERE event.timestamp > '2013-04-15' AND event.timestamp < '2013-04-16'
AND event.signature LIKE '%Conficker%'
GROUP BY INET_NTOA(event.src_ip)
ORDER BY total DESC;

I was unable to get a list of host names for these machines, which would have been nice when you have a large pool of DHCP clients and aren’t looking at the query results until days later. A rough workaround is sketched below.
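
As a hedged sketch (not part of the original query), you could attempt reverse DNS lookups on the resulting IPs from the shell. Here ips.txt is a hypothetical file with one address per line, and this only helps if your DNS server keeps PTR records for DHCP leases:

# Hypothetical sketch: attempt a PTR lookup for each IP in ips.txt.
# Prints the PTR name, or the error text when no record exists.
while read -r ip; do
  echo -n "$ip  "
  host "$ip" | awk '{print $NF}'
done < ips.txt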

Another important query for security purposes is to obtain a list of the IP addresses that the Conficker-infected machines on your network are trying to contact. In this case I’m going to leave out the date range condition, since I’m looking for the IPs that have had the most activity of all time so I can ban them. Network wide. For fun. Just ’cause I can.

SELECT
  INET_NTOA(event.dst_ip),
  count(INET_NTOA(event.dst_ip)) AS total
FROM event IGNORE INDEX (event_p_key, sid_time)
WHERE event.signature LIKE '%Conficker%'
GROUP BY INET_NTOA(event.dst_ip)
ORDER BY total DESC;
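
If you do follow through on the ban, here is a minimal sketch using iptables. bad_ips.txt is a hypothetical export of the destination addresses from the query above; in practice you’d push these to your perimeter firewall or use ipset rather than one rule per address.

# Hypothetical sketch: drop forwarded traffic to each address in bad_ips.txt.
# Run as root on the gateway; consider ipset for long lists.
while read -r ip; do
  iptables -A FORWARD -d "$ip" -j DROP
done < bad_ips.txt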

Get Top URLs in SGUIL (with SecurityOnion Example)

Here’s a quick way to get a list of the top URLs seen on a system that is monitoring traffic with SGUIL. Log in to the MySQL database and query the “event” table for signatures that have “URL” in them. The example below is for Security Onion.

# In mysql, gets the top urls

# -A turns off "reading table information for completion of table and column names" for faster DB selection

mysql -uroot -A

use securityonion_db;

# Change date in the WHERE clause and number in LIMIT
# This query below retrieves the top 100 URLS after the date specified
SELECT event.signature,
  count(*)
FROM event
WHERE event.timestamp > '2013-03-01'
AND event.signature LIKE '%URL%'
GROUP BY event.signature
ORDER BY count(*) DESC
LIMIT 100;

This is a good, quick way to find out what people are requesting. However, because so many CDNs, ad servers, and trackers use multiple hostnames, you don’t get the big picture of what is coming from a given domain itself. It’s still great if you are gathering statistics to help tune your ad blocker or web proxy cache; a rough way to collapse hostnames into domains is sketched below.
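
As a hedged sketch of how you might get that bigger picture: export the hostnames from those signatures to a file (hostnames.txt is hypothetical, one hostname per line), then collapse each to its last two labels in the shell. This is naive for two-level TLDs like .co.uk, but it gives a rough per-domain ranking.

# Hypothetical sketch: reduce cdn1.example.com -> example.com, then rank.
# Naive for ccTLDs such as .co.uk; adjust if those matter on your network.
awk -F. 'NF >= 2 { print $(NF-1) "." $NF } NF < 2 { print }' hostnames.txt \
  | sort | uniq -c | sort -rn | head -100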

Search All Zimbra Mailboxes (Community Edition)

Recently a phishing email got past our spam filter and we wanted to determine the extent to which users would be impacted.  Zimbra’s admin interface in the Community Edition doesn’t have the ability to search through all emails in a convenient way, so we started scouring the web for solutions.  That’s when I came across this post at the Zimbra forum that contained this code:

zmprov gaa |awk '{print "zmmailbox -z -m "$1" search <SEARCHSTRING>' |sh -v

However, this didn’t work for us, and you can see why in the code: it’s missing a closing quote and curly brace. Here’s what worked for us on Zimbra Community Edition installed on Ubuntu Server.

Log in and su to the zimbra user. If you stay root, the paths to the binaries will probably be wrong and will need to be specified as absolute paths.

In the following command, change SEARCH STRING to the text you want to search for.

zmprov -l gaa |awk '{print "zmmailbox -z -m "$1" search \"SEARCH STRING\" "}' |sh -v

zmprov retrieves a list of all user mailboxes on your system and pipes that into awk, which builds a zmmailbox command to search for the specified text in each mailbox, then pipes that into sh (the shell), which executes the formatted command.

The only problem with this command is that it prints the command, along with a line indicating the number of results returned, for every user. So if a user had no results, you still get a line printed; if they did, a line indicating which email matched is printed. This could be improved by returning only pertinent results, as sketched below. If you are executing this on the command line, make sure your scrollback buffer is large enough to hold all results for the number of users you have.
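
Here’s a hedged sketch of that improvement: loop over the accounts instead of piping into sh, and print only accounts whose search produced more than a header line. The -gt 1 test is an assumption about zmmailbox’s output format, which varies by version, so check it against a mailbox you know contains the message first.

# Sketch: print only accounts whose search returns more than one line.
# The "-gt 1" threshold is an assumption about zmmailbox's output --
# verify against a known-positive mailbox and adjust as needed.
zmprov -l gaa | while read -r acct; do
  hits=$(zmmailbox -z -m "$acct" search "SEARCH STRING" 2>/dev/null)
  if [ "$(printf '%s\n' "$hits" | wc -l)" -gt 1 ]; then
    echo "=== $acct ==="
    printf '%s\n' "$hits"
  fi
done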

With approximately 1100 active accounts and 120GB of mailbox data, this command took about 3 hours to execute under normal daily load. The command itself did not appear to increase system load appreciably during execution.

Security Onion: Delete or Reset Snort and Trisul Data Directories

We are building a portable IDS that we take from location to location to assess different legs of the network. The concept was to build the box, test it in the office, configure and apply upgrades, take an image of it in case we needed to restore it, and then send it out into the wild. The problem was that we couldn’t image it easily because of the amount of data we had accrued during testing in the office environment. So we needed a way to reset the box and remove all sensor data. We were primarily using Snort and Trisul Network Analytics on Security Onion.

Trisul

For Trisul this is very simple. Check the size of your Trisul data directory first:

du -hs /nsm/trisul_data

Then you can reset it with:

cd /usr/local/share/trisul
./cleanenv -f -saveinit

You may need to supply additional arguments, particularly if you are working with contexts. The -saveinit argument is important if you have made configuration changes, but it also preserves your interfaces and home networks, so whether or not to include it is up to you, particularly on a portable box. See the Trisul documentation for more on the cleanenv script.

Snort

As for Snort, Security Onion keeps disk use below 90% with an hourly cron job, but if you need to delete all data right now, so you can change networks or image a disk, that cron won’t cut it. I’ve run its contents manually using:

/usr/local/sbin/nsm_sensor_clean --force-yes

That recovered some disk space, but not enough for our purposes.

According to the Security Onion FAQ, pcaps are stored in /nsm/sensor_data/NAME_OF_SENSOR/dailylogs/ and you can verify their disk usage with du -hs.  Ours was 293G.  You can delete these files by replacing NAME_OF_SENSOR with your sensor name and issuing the following command as root:

rm -rf /nsm/sensor_data/NAME_OF_SENSOR/dailylogs/*

MySQL

This one is a little more tricky. For whatever reason, Security Onion or Snort stores the data for each day and interface in its own set of tables. This is a pain to clean by hand, so don’t try. First find out whether this is an issue by logging in to MySQL from the shell using:

mysql -uroot

There is no root password.  You can find out the disk size of each of your MySQL databases using this:

SELECT
   table_schema,
   count(*) AS tables,
   concat(round(sum(table_rows)/1000000,2),'M') AS `rows`,
   concat(round(sum(data_length)/(1024*1024*1024),2),'G') AS data,
   concat(round(sum(index_length)/(1024*1024*1024),2),'G') AS idx,
   concat(round(sum(data_length+index_length)/(1024*1024*1024),2),'G') AS total_size,
   round(sum(index_length)/sum(data_length),2) AS idxfrac
FROM
   information_schema.TABLES
GROUP BY table_schema;

I can see that my big databases are snorby and securityonion_db. If you like, you can find out which tables are the big ones using this:

SELECT table_name,
           round(((data_length + index_length) / (1024*1024)),2) as "MBytes"
FROM information_schema.tables
WHERE table_schema = "securityonion_db";

That will give you some sizes so we know the effect of our commands. You can also check the entire directory like so:

du -hs /var/lib/mysql

Mine was 1.4G, which is not acceptable for taking an image. Now you can purge the sguil data from Security Onion using the provided script at /usr/local/bin/sguil-db-purge. My suggestion is to copy this file into your home directory and call it just “db-purge” so you don’t confuse it with the original, as shown below.
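
A quick sketch of that copy step; ~/db-purge is just the name suggested above:

# Work on a copy so the packaged script stays untouched
cp /usr/local/bin/sguil-db-purge ~/db-purge
chmod +x ~/db-purge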

In order for this to work, you must find the lines:

DAYSTOKEEP=365
source /etc/nsm/securityonion.conf

…and change them to:

source /etc/nsm/securityonion.conf
DAYSTOKEEP=0

If you don’t change the order, sourcing securityonion.conf will override the DAYSTOKEEP variable. Setting it to 0 deletes all the sguil archives. Now run your edited copy of the script:

~/db-purge

That got my /var/lib/mysql down to 678M, but there was still more work to do. There’s still the matter of Snorby. For me it made up only about 250MB, and I wasn’t able to get the suggested fix working, so I left it. The suggestion on the mailing list was to run the following command, but it just caused an error for me:

bundle exec rake snorby:hard_reset

That’s good enough for me. You could keep going if you want, moving into the /nsm directory and deleting logs for things like httpry. I’m prepared to move along at this point with a mostly fresh install, already configured and ready for imaging.

MySQL too many connections – solution

When greeted this morning with a MySQL error stating that a database connection could not be made from our PHP web application, I had to do some testing. First, try connecting to the database. I did this from a remote host and was thrown back an “Error: too many connections.”

From there you should log in as root from localhost and issue:

SHOW FULL PROCESSLIST;

This will give you a list of the connections. Take a good look at the big offenders, because you’ll need to troubleshoot those applications later and find out why they hold so many connections. Now that you know the problem, restart MySQL as sketched below; connections should resume as normal, with the excessive stale connections released, and you can go about the repair.
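
On Ubuntu of this era the restart looks something like the following; service names vary by distribution, so treat this as a sketch:

# Restart MySQL to release the stale connections
sudo service mysql restart
# or, on older init setups:
sudo /etc/init.d/mysql restart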

Also, in your my.cnf you can reduce wait_timeout, which limits how long an application can hold a connection open. The default is 28800 seconds, or 8 hours. I reduced mine to 30 minutes. You could go lower, but we have some apps that do their own connection pooling and I don’t want to break that. In my.cnf:

wait_timeout = 1800

If you are absolutely sure you need more connections, you can also raise max_connections from the default 151 in your my.cnf:

# default max_connections is 151 (1 spare for super user)
max_connections = 201

Now keep an eye on the situation by issuing SHOW FULL PROCESSLIST once in a while to see whether connections are filling up fast; a quick way to watch the count is sketched below. And don’t forget to visit any applications that show up in there frequently. I found two applications that were using persistent connections, held open for two hours, that just didn’t need to.
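
A hedged sketch for that periodic check, assuming passwordless root login from localhost as in the examples above:

# Print the current connection count every 60 seconds
watch -n 60 "mysql -uroot -e \"SHOW STATUS LIKE 'Threads_connected'\""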

Replacing Sendmail with Postfix on Ubuntu causes error postdrop: warning: unable to look up public/pickup: No such file or directory

On one of our machines the original sysadmin had set up Sendmail, but on all our other machines the default MTA installed by Ubuntu is Postfix. Rather than maintaining and troubleshooting several types of systems, I wanted the Sendmail machine changed to match our Postfix systems.

The advice at this forum post was to simply apt-get install postfix, which would automatically remove Sendmail. And it did. The Postfix configuration screen came up and I set everything up as usual. However, when testing it from the command line by trying to send a logwatch report, I got this error:

postdrop: warning: unable to look up public/pickup: No such file or directory

Thanks to this article at databasically.com I found out that Ubuntu wasn’t removing Sendmail completely; in fact, it wasn’t even stopping the Sendmail process! Here’s the solution that was posted:

# Recreate the FIFO that postdrop expects
mkfifo /var/spool/postfix/public/pickup
# Find the leftover sendmail process
ps aux | grep mail
kill [insert process number]
# Then restart Postfix
sudo /etc/init.d/postfix restart

PHP APC config syntax causes [apc-error] apc_mmap: mmap failed: Invalid argument

After upgrading Ubuntu Server from 9.10 to 10.04 LTS, PHP’s APC cache wasn’t functioning. Apache wouldn’t start; it hung in the process list and printed this error to /var/log/apache2/error.log:

[apc-error] apc_mmap: mmap failed: Invalid argument

The Apache process would show up in the process list like this:

apc@hostname# ps aux | grep apache
www-data 6958 104 0.0 139044 3624 ? R 12:47 0:19 /usr/sbin/apache2 -k start

This process would then have to be killed, APC commented out, and the web server restarted just to continue on without APC until a solution was found.

The PHP manual states this regarding MMAP support in APC:

http://php.net/manual/en/apc.configuration.php

When APC is compiled with mmap support (Memory Mapping), it will use only one memory segment, unlike when APC is built with SHM (SysV Shared Memory) support that uses multiple memory segments. MMAP does not have a maximum limit like SHM does in /proc/sys/kernel/shmmax. In general MMAP support is recommended because it will reclaim the memory faster when the webserver is restarted and all in all reduces memory allocation impact at startup.

APC was made to run by commenting out all lines from the PHP config file except for:

extension=apc.so
apc.enabled = 1

This config can live in a number of places. In 9.10, APC had been compiled via PECL, so it was in our /etc/php5/apache2/php.ini file. However, in 10.04 APC is a package, so we removed the PECL version, installed the packaged one with apt-get install php-apc, and moved the configuration to /etc/php5/conf.d/apc.ini for better consistency.

pear uninstall apc
apt-get install php-apc

As I began to uncomment lines one by one, it turned out the culprit was the apc.shm_size directive. The default size is 30M, but as soon as the directive was uncommented it crashed Apache. I was unable to specify any value at all, even the same or a lesser one, with or without quotes. That’s when I started thinking syntax might be the problem: it works with the default value (shm_size commented out) but fails with an “invalid argument” error, which suggests APC is passing an invalid argument to mmap. This post confirmed my suspicion:

http://stackoverflow.com/questions/6716929/apc-configuration-on-ubuntu-10-4-problem-with-apc-shm-size-apc-shm-segments-an

It turns out that the “M” for megabytes cannot be specified in the shm_size directive for APC on Ubuntu Server 10.04, because 10.04 ships APC version 3.1.3p1. On 9.10, APC wasn’t available as a package, so it was installed with PECL, which pulled in a more recent version of APC (3.1.9) that did allow specifying the “M” in the shm_size directive.

If you wish this to work in your config file, it should read like this in older versions of APC:

apc.shm_size = 100

This would specify a 100MB shared memory segment, and would be equivalent to this in newer versions:

apc.shm_size = 100M

And you can also put quotes around the “100M” if you like.

After these changes I had Apache up and running again, the APC cache helping PHP along, and some of the quickest loading pages I’ve seen in a while.

How To Disable WordPress Plugins Manually

If you get stuck being unable to log in to WordPress because of a misbehaving plugin, you have a couple of options.

You can log in to your WordPress database using the command-line MySQL client or phpMyAdmin and look for the wp_options table. If you are using a security plugin that changes table prefixes, you’ll have to replace wp_ with your prefix. Once there, look for a value in the option_name column called active_plugins. You can find it like this:

SELECT * FROM wp_options WHERE option_name like '%plugin%';

This will give you a list of plugin options to examine. If you are using WPMU or WP3 Multisite and this is a globally enabled plugin, you’ll need to use the table name with the ID of the site in the prefix, like wp_2_options instead of wp_options. The option_name “active_plugins” used to contain all active plugins, but from examining this install it appears “transient_plugin_slugs” holds a plugin list as well. If you choose to modify a value, copy and paste the original somewhere for easy replacement once things are fixed; one way to clear it is sketched below.
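
If you do modify it, here’s a hedged sketch of the usual approach, assuming the default wp_ prefix: active_plugins holds a serialized PHP array, and a:0:{} is an empty one, so overwriting the value deactivates every plugin at once. Save the old value first so you can paste it back.

-- Save the current value first so you can restore it later
SELECT option_value FROM wp_options WHERE option_name = 'active_plugins';
-- a:0:{} is an empty serialized PHP array: this deactivates all plugins
UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';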

The option I chose for this Multisite plugin that prevented login was simply to rename its folder in the plugins directory, as sketched below. That causes WordPress not to load the bad files, because they are no longer in the expected location.
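
Something like this, where bad-plugin stands in for the misbehaving plugin’s folder name:

# Renaming the folder keeps WordPress from loading the plugin's files
mv wp-content/plugins/bad-plugin wp-content/plugins/bad-plugin.disabled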

Once back in the admin dashboard, you can disable the plugin completely and take whatever action you need to repair, replace, or trash it for good.

MySQL Client Switch for Vertical Output

Did you ever need to examine one row from a MySQL table and find yourself unable to line up the column headers with the data because the line was too long?

Well, no fear: it turns out the MySQL client supports a statement terminator, \G, which you can use in place of the usual ; to arrange output vertically for easier reading. This is very handy when you are looking at only one row. Here’s an example:


mysql> SELECT * FROM emp_vw WHERE id = 10110 LIMIT 1 \G
*************************** 1. row ***************************
id: 10110
company_id: 21100
last_name: Stall
first_name: John
gender: MALE
email: test@example.com
address: 277 142nd Ave
city: New York
state: NY
phone: (888)555-1212
birthday: 1963-05-12
active: 1
timestamp: 2011-11-30 18:58:50
1 row in set (0.01 sec)

This is a great way to examine the contents of any table’s fields for investigative purposes. It works very well for views and tables with many columns that form long lines when displayed horizontally. The same behavior can also be made permanent through your my.cnf (see the sketch below), but I prefer to leave it off so vertical output is enabled with \G only when needed.
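
For reference, here’s a sketch of what that my.cnf entry might look like. This assumes your mysql client supports the -E/--vertical option, so check mysql --help for your version before relying on it:

[mysql]
# Same effect as ending every statement with \G (mysql -E / --vertical).
# I leave this out and use \G per query instead.
vertical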