
WordPress and security are not the best of friends, but if you’re going to be dragged over the coals by Ivan you might as well make him work for it. Fail2Ban is a great little service for stalling brute-force attempts against SSH and similar auth methods, and it can also monitor and block persistent failed authentications against WordPress and Webmin. Since WordPress does not log failed login attempts on its own, a simple plugin is required to give Fail2Ban the proper notifications; that plugin is called “WP fail2ban” and can be found here. You will need to make a few configuration changes to Fail2Ban to get things working. These are the configurations that worked for me on Fedora:

WordPress jail.local (/etc/fail2ban/jail.local):

[wordpress]
enabled  = true
filter   = wordpress
logpath  = /var/log/messages
maxretry = 5
action   = iptables-multiport[name=wordpress, port="http,https", protocol=tcp]
           sendmail-whois[name=Wordpress, dest=root, sender=fail2ban@jackson-brain.com, sendername="The WordPress Bouncer"]

WordPress filter (/etc/fail2ban/filter.d/wordpress.conf):

_daemon = wordpress

# Option:  failregex
# Notes.:  regex to match the password failures messages in the logfile. The
#          host must be matched by a group named "host". The tag "<HOST>" can
#          be used for standard IP/hostname matching and is only an alias for
#          (?:::f{4,6}:)?(?P<host>[\w\-.^_]+)
# Values:  TEXT
#
failregex = ^%(__prefix_line)sAuthentication failure for .* from <HOST>$
            ^%(__prefix_line)sBlocked authentication attempt for .* from <HOST>$
            ^%(__prefix_line)sBlocked user enumeration attempt from <HOST>$

# Option:  ignoreregex
# Notes.:  regex to ignore. If this regex matches, the line is ignored.
# Values:  TEXT
#
ignoreregex =
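The real way to check a filter is `fail2ban-regex /var/log/messages /etc/fail2ban/filter.d/wordpress.conf`, but a quick grep approximation works too. The log line below is a hypothetical sample in the format the WP fail2ban plugin writes to syslog, and the pattern is the core of the first failregex with `<HOST>` swapped for an IP pattern:

```shell
# hypothetical syslog line from the WP fail2ban plugin
line='Jan 10 12:00:00 web1 wordpress(example.com)[1234]: Authentication failure for admin from 203.0.113.5'
# the core of the jail's first failregex, with <HOST> replaced by an IP pattern
echo "$line" | grep -Eq 'Authentication failure for .* from [0-9.]+$' && echo 'filter would match'
```

If the grep stays silent, the plugin's log format on your box differs and the failregex needs adjusting to match.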

For Webmin, all I needed to do was update the [webmin-auth] section to properly reflect the location of failed webmin login attempts:

[webmin-auth]

enabled = true
filter  = webmin-auth
action  = iptables-multiport[name=webmin,port="10007"]
logpath = /var/log/secure
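As a sanity check, the stock webmin-auth filter keys on failed-login lines in the auth log; the exact wording below is an assumption, so inspect your own /var/log/secure for the real format before relying on it:

```shell
# hypothetical failed-login line as Webmin writes it to the auth log
line='Jan 10 12:00:00 web1 webmin[2211]: Invalid login as root from 203.0.113.5'
echo "$line" | grep -Eq 'Invalid login as .* from [0-9.]+$' && echo 'jail would trigger'
```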

Webmin makes certain things easy when managing remote Unix/Linux servers; other things it makes more difficult, if only because its modules don’t get updated very often. Shorewall makes managing large iptables rule sets easy, but its Webmin interface is outdated. For instance, the Blacklist section in the Shorewall Webmin module directs to ‘/etc/shorewall/blacklist’, which according to the Shorewall documentation: ‘The blacklist file is used to perform static blacklisting by source address (IP or MAC), or by application. The use of this file is deprecated and beginning with Shorewall 4.5.7, the file is no longer installed.’

The Shorewall Webmin module still directs the user to this file for modification, and because of that any changes you make there have no effect. The file you should be editing is ‘/etc/shorewall/blrules’ as documented here.
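For reference, blrules entries use an ACTION/SOURCE/DEST column layout; a minimal sketch (the addresses are hypothetical):

```
#ACTION		SOURCE			DEST
DROP		net:203.0.113.5		all
DROP		net:198.51.100.0/24	all
```

After editing, run ‘shorewall check’ and then restart Shorewall so the new rules are compiled in.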

When attempting to compile PHP on CentOS 6.x, you might run into compile errors such as:
php pdo/php_pdo.h: No such file or directory
and
php pdo/php_pdo_driver.h: No such file or directory

These files do exist, just not in the location that the configure script looks for them. There are two ways to fix this, the first would be to modify the configure script to look in the proper place and the second would be to create two symbolic links for the rogue files. I chose the second method.

The files are in ext/pdo/ of the PHP source tree, but the configure script looks in pdo/, so from the source root we make the pdo directory and create the links within:

make clean
mkdir pdo
ln -s ext/pdo/php_pdo.h pdo/php_pdo.h
ln -s ext/pdo/php_pdo_driver.h pdo/php_pdo_driver.h

OR, more simply…

ln -s ./ext/pdo

Now re-configure and compile. Done.
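The single-symlink variant can be sanity-checked in a scratch directory; this sketch just reproduces the source layout to show why configure becomes happy:

```shell
# reproduce the PHP source layout in a throwaway directory
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p ext/pdo
touch ext/pdo/php_pdo.h ext/pdo/php_pdo_driver.h
# creates ./pdo -> ./ext/pdo, which is where configure looks
ln -s ./ext/pdo
[ -f pdo/php_pdo.h ] && echo 'header resolves at pdo/php_pdo.h'
```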

Meld is a neat diff tool, but when I tried to compare two Perl scripts over SSH the other day, CentOS just crapped out with bad things about GConf, ORBit not being configured for TCP/IP, and stale NFS locks. Specifically: “Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash.”

All baloney.

To fix this I uninstalled the pre-compiled Meld from the Centos repos and rolled my own. Worked out great:

sudo yum remove meld
wget https://git.gnome.org/browse/meld/snapshot/meld-1.8.4.tar.gz
tar -xvzf meld-1.8.4.tar.gz
cd meld-1.8.4
make prefix=/usr/local/
sudo make install
meld

So before you go hunting around for peculiar configuration changes, try to build your own.

HHVM, the HipHop Virtual Machine created by Facebook, is pretty cool, bringing C-like performance and scalability to the PHP crowd. PHP is certainly not my favorite language, but if you need something deployed quickly on a generic platform and network accessible, PHP is a good starting point.

There are a number of good resources on the net for this, but like all the posts here they are notes for myself and what worked for me.

Get the required packages for compilation:

sudo yum install git svn cpp make autoconf automake libtool patch memcached gcc-c++ cmake wget boost-devel mysql-devel pcre-devel gd-devel libxml2-devel expat-devel libicu-devel bzip2-devel oniguruma-devel openldap-devel readline-devel libc-client-devel libcap-devel binutils-devel pam-devel elfutils-libelf-devel

For CentOS, a couple of libraries are too old for HHVM, and we have to compile/install them first, starting with libmcrypt; we need the development library as well:

cd ~/dev
wget 'http://pkgs.repoforge.org/libmcrypt/libmcrypt-devel-2.5.7-1.2.el6.rf.x86_64.rpm'
wget 'http://pkgs.repoforge.org/libmcrypt/libmcrypt-2.5.7-1.2.el6.rf.x86_64.rpm'
rpm -Uhv libmcrypt-*.rpm

and GMP:

wget https://gmplib.org/download/gmp/gmp-5.1.3.tar.bz2
tar jxf gmp-5.1.3.tar.bz2 && cd gmp-5.1.3/
./configure --prefix=/usr/local/gmp
make && make install
cd ..

and mpfr:

wget http://www.mpfr.org/mpfr-current/mpfr-3.1.2.tar.bz2
tar jxf mpfr-3.1.2.tar.bz2 && cd mpfr-3.1.2/
./configure --prefix=/usr/local/mpfr --with-gmp=/usr/local/gmp
make && make install 
cd ..

and mpc:

wget http://www.multiprecision.org/mpc/download/mpc-1.0.1.tar.gz
tar xzf mpc-1.0.1.tar.gz && cd mpc-1.0.1
./configure --prefix=/usr/local/mpc --with-mpfr=/usr/local/mpfr --with-gmp=/usr/local/gmp
make && make install
cd ..
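GMP, MPFR, and MPC all follow the same fetch/configure/build pattern, so the three steps above can be expressed as one helper. This is a dry-run sketch only; the echo prints what would be executed rather than running it:

```shell
# dry-run sketch of the shared download/configure/make pattern used above
build_dep() {
    name=$1 url=$2; shift 2
    tarball=${url##*/}
    srcdir=${tarball%.tar.*}
    echo "wget $url && tar xf $tarball && cd $srcdir && ./configure --prefix=/usr/local/$name $* && make && make install"
}
build_dep mpc http://www.multiprecision.org/mpc/download/mpc-1.0.1.tar.gz --with-gmp=/usr/local/gmp
```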

make sure you have ncurses and ncurses-devel:

sudo yum install ncurses-devel

now build the cmake utility:

wget http://www.cmake.org/files/v2.8/cmake-2.8.12.1.tar.gz
tar -xvzf cmake-2.8.12.1.tar.gz 
cd cmake-2.8.12.1
./configure
make
sudo make install

Google Glog, for this we are going to need libcurl which is not mentioned in the docs:

sudo yum install libcurl

now glog:

svn checkout http://google-glog.googlecode.com/svn/trunk/ google-glog
cd google-glog/
./configure --prefix=/usr
make
sudo make install

Now we need jemalloc:

wget http://www.canonware.com/download/jemalloc/jemalloc-3.0.0.tar.bz2
tar -xvjf jemalloc-3.0.0.tar.bz2 
cd jemalloc-3.0.0
./configure --prefix=/usr
make
sudo make install

libmemcached, I was already running a Memcached server under a few services, so I had to stop it, uninstall the packaged library, and rebuild:

wget https://launchpad.net/libmemcached/1.0/1.0.17/+download/libmemcached-1.0.17.tar.gz
tar -xvzf libmemcached-1.0.17.tar.gz 
cd libmemcached-1.0.17
./configure --prefix=/usr
make
sudo service memcached stop
sudo yum remove libmemcached
sudo make install
cd ..

Now tbb. NOTE – CentOS 6.3 does have tbb available, but again it’s not recent enough, so avoid the temptation to yum it. I also had to make some adjustments to the install process, though this might be avoided with a more specific set of config flags:

wget 'http://threadingbuildingblocks.org/sites/default/files/software_releases/source/tbb40_20120613oss_src.tgz'
tar -xvzf tbb40_20120613oss_src.tgz 
cd tbb40_20120613oss
make
sudo mkdir -p /usr/include/serial
sudo cp -a include/serial/* /usr/include/serial/
sudo mkdir -p /usr/include/tbb
sudo cp -a include/tbb/* /usr/include/tbb/
sudo make install
sudo cp build/linux_intel64_gcc_cc4.7.3_libc2.12_kernel2.6.32_release/libtbb.so.2 /usr/lib64/
sudo ln -s /usr/lib64/libtbb.so.2 /usr/lib64/libtbb.so

libdwarf, I got lazy here and yum’d it and it worked, so moving on:

sudo yum install libdwarf libdwarf-tools libdwarf-devel

now boost, this one also took a while. Make sure you have python and related devel libs installed:

wget http://downloads.sourceforge.net/project/boost/boost/1.50.0/boost_1_50_0.tar.bz2
tar -xvjf boost_1_50_0.tar.bz2 
cd boost_1_50_0
./bootstrap.sh --prefix=/usr --libdir=/usr/lib
./bjam --layout=system
sudo ./bjam --layout=system install

and finally, GCC. This one takes a while, so find something to do. I actually used the newest version of GCC from http://gcc.gnu.org/releases.html, which requires a newer version of the cmake utility:

wget http://ftp.gnu.org/gnu/gcc/gcc-4.8.2/gcc-4.8.2.tar.bz2
tar jxf gcc-4.8.2.tar.bz2 && cd gcc-4.8.2
./configure --prefix=/usr/local/gcc --enable-threads=posix --disable-checking --disable-multilib --enable-languages=c,c++ --with-gmp=/usr/local/gmp --with-mpfr=/usr/local/mpfr/ --with-mpc=/usr/local/mpc/

this configure line worked better for me on a seemingly identical CentOS image:

./configure --prefix=/usr --with-libdir=lib64 --with-gmp=/usr/lib64 --with-mpfr=/usr/lib64 --with-mpc=/usr/lib64

Now we are ready for HHVM. This is a large download, even through git:

git clone git://github.com/facebook/hhvm.git
cd hhvm
export CMAKE_PREFIX_PATH=/usr
export HPHP_HOME=`pwd`

Build HHVM:

git submodule init
git submodule update --recursive
export HPHP_HOME=`pwd`
cmake .
make

If this works, you’re home. Check your install with the version command:

hhvm --version

This is not a walk-through; this is a code dump, and much like a trash dump you are going to have to do a little digging to find anything useful within. So, say you have GitWeb running on your git revision server, and a remote repo somewhere else, and you’d like to see from the GitWeb page which commit the remote repo is sitting at. This is how:

The first version of this code only handled one repo, this one can be branched into as many as you need.

First off, you need to set up an extremely under-privileged user on the host the remote repo sits on. Then you’ll need to install PHP with the ssh2_connect extension on the GitWeb host, along with SSH on both machines. Then you need to create a set of keys on the remote repo machine and import them to the GitWeb host to use for authentication.

The php query script:

// We are using a priv/pub key pair for this, they are protected with .htaccess
// !!!! The user for this MUST be highly underpriv'd to avoid the risk of dangerous command execution. We do simple sanitation but the enemy is crafty. !!!!
// 
// note: if you use the url instead of the ip in the connection setup, you will need to update the AllowUsers directive in the sshd config.
// We are restricting access for revuser to the IP of this server only.
class GetHead {

    // the no-priv's user on the host of the remote repo
    protected $sshUser = 'revuser';
    
    // the IP of the remote repo host
    protected $hostIP = '10.16.1.21';
    
    // the SSH port on the remote repo host
    protected $hostPort = 22;
    
    // the path to the local rsa public key
    protected $rsaPub = '/var/www/git/auth/id_rsa.pub';
    
    // the path to the local RSA private key
    protected $rsaPriv = '/var/www/git/auth/id_rsa';

    /**
     * This function queries the remote host via SSH to determine the current HEAD revision of
     * each of the remote repos described within the incoming data.
     *
     * $var $data The JSON encoded data from the ajax request.
     * @return void
     */
    public function GetHead($data) {

        // our return object
        $rData = new stdClass();

        // the incoming json data, decoded (fall back to the raw GET param)
        $jData = json_decode($data !== null ? $data : $_GET['paths']);

        // assume we are going to succeed
        $rData->success = true;
        $rData->repos = new stdClass();

        // configure the connection using the local keys
        $connection = ssh2_connect($this->hostIP, $this->hostPort, array('hostkey' => 'ssh-rsa'));

        // attempt to authenticate
        if (ssh2_auth_pubkey_file($connection, $this->sshUser, $this->rsaPub, $this->rsaPriv)) {

            // iterate through the local repos and retrieve their HEAD
            foreach ($jData as $name => $local) {

                // the command we are going to exec
                $cmd = 'git --git-dir=' . $local->path . '.git --work-tree=' . $local->path . ' rev-parse HEAD';

                // capture the return stream, blocking is set to ensure that we have data before we try to read from the stream
                $stream = ssh2_exec($connection, $cmd);
                stream_set_blocking($stream, true);

                // $result is the current head revision
                $result = stream_get_contents($stream);

                // close the stream 
                fclose($stream); 

                // make sure we have something and it's not FALSE
                if (!empty($result)) {

                    // the return data for this repo
                    $rData->repos->$name = new stdClass();
                    $rData->repos->$name->success = true;
                    $rData->repos->$name->head = str_replace(PHP_EOL, '', $result);
                } else {

                    // return the error message
                    $rData->repos->$name = new stdClass();
                    $rData->repos->$name->success = false;
                    $rData->repos->$name->error = 'Error retrieving HEAD of remote repository.';
                }
            }
        } else {
            // fail
            $rData->success = false;
            $rData->error = 'SSH Authentication Failed, or missing PHP library.';
        }

        // close the connection
        ssh2_exec($connection, 'exit');
        unset($connection);
        
        // return the data
        return $rData;
    }
}

if ($_SERVER['REQUEST_METHOD'] === 'GET' && !empty($_GET['paths'])) {

    // the incoming data
    $data = filter_input(INPUT_GET, 'paths', FILTER_UNSAFE_RAW);

    // init the class and attempt to get some answers
    $head = new GetHead($data);

    // send the data back as a json obj
    header('Content-Type: application/json');
    echo json_encode($head->GetHead($data));
} else {

    // they are not asking the right questions
    echo 'Go away Dave.';
}
// done
die();
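For reference, the command string the loop assembles for each repo expands like this (the path is a hypothetical example matching the JS config below):

```shell
# mirror of the $cmd concatenation in the PHP above (path is hypothetical)
path='/var/www/vhosts/remoteRepo1/'
cmd="git --git-dir=${path}.git --work-tree=${path} rev-parse HEAD"
echo "$cmd"
```

This is also why the no-privilege user matters: whatever lands in that string runs on the remote host.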

And the JavaScript. This script requires jQuery for the AJAX routine; its job is to query the above PHP to retrieve the current revision.

// update this object with the repos and their paths on production
var repoObject = {
    'repo.git' : { 
        'repo1' : {
            'path'   : '/var/www/vhosts/remoteRepo1/',
            'branch' : 'master'
        },
        'repo2' : {
            'path'   : '/var/www/vhosts/remoteRepo2/',
            'branch' : 'master'
        }
    }
};
// this is our success function
function querySuccess(data){
    // if we have errors, display them
    if( 'error' in data ){
        $('#query_error').text(data.error);
    }else{
        for(var repo in data.repos){
            if(data.repos.hasOwnProperty(repo)) {
                var repoObj = data.repos[repo];
                $('.' + repoObj.head).addClass('currentHead ' + repo);
            }
        }
    }
}
// we run this on document ready for great justice
$(function(){
    var repoTitle = $('#project_name').val();
    // only check rev if we are within a recognized repo
    if (  repoTitle in repoObject) {
        var rData = {
            'paths' : JSON.stringify(repoObject[repoTitle])
            };
        $.ajax({
            dataType: 'json',
            url: '/get_head.php',
            data: rData,
            success: function(data){
                querySuccess(data);
            },
            error: function(){
                $('#query_error').text('There was an error connecting to the server or parsing the response.');
            },
            beforeSend: function(){
                $('#busy_anim').css('display','inline-block');
            },
            complete: function(){
                $('#busy_anim').css('display','none');
            }
        });
    }
});
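For clarity, a successful response from the PHP side looks roughly like this (the hash is made up); querySuccess() walks data.repos and tags the table row whose class matches each head:

```shell
# sample of the JSON contract between get_head.php and the jQuery handler
resp='{"success":true,"repos":{"repo1":{"success":true,"head":"dc61bb12aa90"}}}'
echo "$resp"
```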

Sometimes I fuck up. I know it’s hard to fathom, but it happens. With women it’s usually permanent: they might be smiling and laughing at your jokes, but the storm is brewing and she will have her revenge. Fortunately GIT is more willing to work with me in reconciling my mistakes. Hello GIT rebase.

There is enough documentation out there on rebase, so I’m just going to make a quick note here so that I can avoid swimming through Google results and the GIT manual.

Revert the commit:

git revert vd67abx2

Delete the last commit on branch master:

git push repoName +dc61lb62^:master

Delete a previous commit down to the last parent:

git rebase -i dc61bb12^

When the editor pops up, delete the lines of the commits you want to expunge from the repo. If you want to edit the commit message, change ‘pick’ to ‘edit’.
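The revert flow can be rehearsed in a throwaway repo before touching anything real; this sketch (hypothetical file and messages) makes two commits and reverts the second:

```shell
# throwaway repo: commit twice, then revert the bad commit
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com && git config user.name you
echo good > file.txt && git add file.txt && git commit -qm 'good change'
echo bad > file.txt && git commit -qam 'bad change'
git revert --no-edit HEAD >/dev/null
cat file.txt
```

Unlike rebase, revert adds a new commit rather than rewriting history, so it is safe on branches others have pulled.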

As a general rule, file systems degrade in performance as they fill up, and this especially holds true for random reads within a single directory. I like to call this one The Irrevocable Solution: delete all files within a directory older than one day.

Within the target directory:

sudo find . -type f -mtime +1 -exec rm -f {} +
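A quick way to convince yourself the predicate is right before pointing it at real data; only the file older than a day should go (GNU touch’s -d flag is assumed here):

```shell
# scratch directory with one fresh file and one two-day-old file
tmp=$(mktemp -d)
touch "$tmp/fresh"
touch -d '2 days ago' "$tmp/stale"
find "$tmp" -type f -mtime +1 -exec rm -f {} +
ls "$tmp"
```

Note that -mtime +1 means “more than one full 24-hour period old”, so anything younger than 48 hours survives.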

The internet is a bastion of opinion and bullshit. SELinux is a rather difficult bride, and as such there is much disinformation regarding how to keep her happy. I have an Apache application that needs to update its own cache files, and the Fedora installation it was under had the files under the web root with a context of ‘httpd_sys_content_t’. This didn’t work, and many of the first suggestions I came across said to either disable SELinux or set some peculiar context like ‘public_content_rw_t’ and then alter an SELinux boolean to allow such access. We can have no such shenanigans around here; this is how it should be done:

sudo chcon -R -t httpd_sys_content_rw_t /var/www/path/to/dir/

Keep in mind that chcon changes do not survive a full filesystem relabel; to make the context permanent, record it with ‘semanage fcontext -a’ and apply it with ‘restorecon -R’.

UPDATE: There is a new version of this effort.

I needed to modify a local git server’s web interface to indicate which revision a remote repo was currently on. This is what happened:

The PHP:

// make sure we have what we need
if( $_SERVER['REQUEST_METHOD'] == 'GET' && !empty($_GET['path'])){

    // our return array
    $return = array();

    // build the git command with the supplied path
    $gitCmd = 'git --git-dir='. $_GET['path'] . '.git --work-tree=' . $_GET['path'] . ' rev-parse HEAD';

    // configure the connection using the local keys
    $connection = ssh2_connect('host.domain.tld', 22, array('hostkey' => 'ssh-rsa'));

    // attempt to authenticate
    if( ssh2_auth_pubkey_file($connection, 'ssh_user', '/path/to/id_rsa.pub', '/path/to/id_rsa') ){

	// capture the return stream, blocking is set to ensure that we have data before we try to read from the stream
	$stream = ssh2_exec($connection, $gitCmd);
	stream_set_blocking($stream, true);

	// $result is the current head revision
	$result = stream_get_contents($stream);

	// make sure we have something and it's not FALSE
	if(!empty($result)){
	    $return['head'] = $result;
	}else{
	    $return['error'] = 'Error retrieving HEAD of remote repository.';
	}
    }else{
	// fail
	$return['error'] = 'SSH Authentication Failed.';
    }

    // send the data back as a json obj
    echo json_encode($return);
}

// done
die();

And the JS:

// update this object with the repos and their paths
var repoObject = {
    'repoName.git' : {
        'path' : '/path/to/local/repo/'
    }
};
// this is our success function
function querySuccess(data){
    // if we have errors, display them
    if( 'error' in data ){
	$('#query_error').text(data.error);
    }else{
        $('.' + data.head).addClass('currentHead');
    }
}
// we run this on document ready for great justice
$(function(){
    var repoTitle = $('#project_name').val();
    // only check rev if we are within a recognized repo
    if (  repoTitle in repoObject) {
        var rData = {
            'path' : repoObject[repoTitle].path
	    };
        $.ajax({
            dataType: 'json',
            url: '/get_head.php',
            data: rData,
            success: function(data){
	        querySuccess(data);
            },
            beforeSend: function(){
                $('#busy_anim').css('display','inline-block');
            },
            complete: function(){
                $('#busy_anim').css('display','none');
            }
        });
    }
});

You are going to want to ensure that you protect the keys you generate for this; in this example they are in the web directory, protected only by Apache.
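A sketch of that Apache-side protection, assuming a 2.2-era server and that the keys live under the DocumentRoot as described (better still, move them outside the web root entirely and chmod 600 them):

```apache
# .htaccess in the key directory: deny all direct web access
<Files "id_rsa*">
    Order allow,deny
    Deny from all
</Files>
```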