
Category Archives: work

This is the second time I’ve run into this, so I’m making a note of it. If you’re compiling and installing git/gitweb outside of a package manager, the SELinux file contexts might not be correct, preventing Apache from executing the gitweb CGI script. Something like this will appear in /var/log/audit/audit.log:
avc: denied { search } for pid=16678 comm="gitweb.cgi" dev=sda2 ...
avc: denied { read } for pid=16678 comm="gitweb.cgi" dev=sda2 ...
avc: denied { open } for pid=16678 comm="gitweb.cgi" dev=sda2 ...

To fix the issue:
$ sudo chcon -t httpd_git_script_exec_t /path/to/gitweb.cgi

If that doesn’t work, your SELinux policies might be different; you can try:
sudo restorecon -v /path/to/gitweb/dir/*
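Note that chcon changes do not survive a filesystem relabel. If you want the context to stick, record it in the local policy first and then apply it; the paths below are placeholders for wherever your gitweb actually lives:

```shell
# record the context in the local policy for the whole gitweb tree (hypothetical path)
sudo semanage fcontext -a -t httpd_git_script_exec_t '/path/to/gitweb(/.*)?'
# apply the recorded context
sudo restorecon -Rv /path/to/gitweb
# verify the resulting label on the CGI script
ls -Z /path/to/gitweb/gitweb.cgi
```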

Tasked with archiving large groups of tables, moving them from one database to another. There are a few caveats depending on the architecture of the tables being moved, specifically bugs in MySQL 5.1, 5.5, 5.6 and possibly a few more versions, documented here. This procedure handles the AUTO_INCREMENT issue, but will not alter tables containing various types of indexes. There are parameters for the origin and destination databases in case you make a mistake and need to move one back. The archiveTable parameter is a boolean value (1/0); the rest is fairly straightforward.

-- set a temp delimiter for procedure construction
DELIMITER $$

-- create the procedure
CREATE PROCEDURE archive_tables(tablePrefix VARCHAR(32), originTable VARCHAR(32), destinationTable VARCHAR(32), archiveTable BOOLEAN)
BEGIN
    -- declare the variables
    DECLARE done BOOLEAN DEFAULT FALSE;
    DECLARE tableName VARCHAR(50);
    DECLARE newTableName VARCHAR(70);
    DECLARE mv_query VARCHAR(1000);
    DECLARE alt_query VARCHAR(1000);
    -- create the cursor with the selected tables (those matching the prefix in the origin schema)
    DECLARE cur1 CURSOR FOR
        SELECT TABLE_NAME
        FROM information_schema.TABLES
        WHERE TABLE_NAME LIKE CONCAT(tablePrefix, '%')
        AND TABLE_SCHEMA=originTable;
    -- this turns 'done' TRUE when there are no more tables
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    -- begin
    OPEN cur1;
    read_loop: LOOP
        -- push the current cursor element into the tableName var
        FETCH cur1 INTO tableName;
        -- if we are done, stop
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET newTableName = CONCAT(destinationTable,'.',tableName);

        -- create the rename query
        SET mv_query = CONCAT('RENAME TABLE ', originTable, '.', tableName, ' TO ', newTableName);
        SET @mvQuery = mv_query;

        -- exec rename
        PREPARE stmt FROM @mvQuery;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        -- are we archiving the relocated tables?
        -- Note: The ARCHIVE engine will not work with all tables, and there is also a bug related
        --       to AUTO_INCREMENT columns documented here (dev is running 5.1.73). The temp
        --       workaround is setting AUTO_INCREMENT to 0, but even this is not sufficient for
        --       all tables. I suggest not using this feature even though the benefits are many.
        IF archiveTable THEN
            -- create engine conversion query
            SET alt_query = CONCAT('ALTER TABLE ', newTableName, ' AUTO_INCREMENT=0 ENGINE=archive');
            SET @altQuery = alt_query;
            -- set the engine attribute
            PREPARE stmt FROM @altQuery;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END IF;
    END LOOP;
    CLOSE cur1;
END$$

-- restore the standard delimiter
DELIMITER ;
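Assuming hypothetical schema names, a call that moves every table prefixed log_ out of prod_db without converting the storage engine would look like:

```sql
-- move log_* tables from prod_db to archive_db; 0 = leave the engine alone
CALL archive_tables('log_', 'prod_db', 'archive_db', 0);
```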

The errors encountered during configure:
checking for sysvipc shared memory support... no
checking for mmap() using MAP_ANON shared memory support... no
checking for mmap() using /dev/zero shared memory support... no
checking for mmap() using shm_open() shared memory support... no
checking for mmap() using regular file shared memory support... no
checking "whether flock struct is linux ordered"... "no"
checking "whether flock struct is BSD ordered"... "no"
configure: error: Don't know how to define struct flock on this system, set --enable-opcache=no

I encountered these on a local system recently, a system where PHP 5.5.12 had successfully compiled WITH opcache several weeks prior. One of the major performance advantages of the 5.5 generation is the opcache extension, so disabling it was not an option. Long story short: the flock (file lock) structure layout is detected with the help of libltdl, part of the GNU libtool family, and the opcache extension requires this information to establish part of its memory-mapping strategy during compilation. How these libraries became lost or corrupted since the last install is a mystery, but I’m not Angela Lansbury, and I need this to work because there is a particular LDAP-related bugfix that may impact us. This is what needs to be done:

$ sudo yum reinstall libtool libtool-ltdl libtool-ltdl-devel
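After the reinstall, re-run configure and make. Assuming the new php binary is on your PATH and opcache is loaded as a zend_extension in php.ini, you can confirm it made it into the build:

```shell
# list loaded modules and look for the opcache Zend extension
php -m | grep -i opcache
# the version banner also mentions Zend OPcache when it is loaded
php -v
```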

If you start seeing messages like
validating @0xb4a348a98: AAAA: no valid signature found
validating @0xb4224288: SOA: no valid signature found
validating @0xb42f74910: AAAA: no valid signature found

in your syslog, check your BIND config. On Red Hat systems it’s located at /etc/named.conf, and if DNSSEC is enabled (as it should be) it will contain a set of configuration options that read:
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;

The ambiguity here resides in the config line dnssec-validation yes;, which instructs named to validate signed responses but, without further direction, does not provide a set of root keys to compare against. The result is that named cannot validate the signatures.

To correct this, change the ‘yes’ option to ‘auto’, which instructs named to use the set of compiled-in root keys it ships with. Your DNSSEC config should look something like this:
dnssec-enable yes;
dnssec-validation auto;
dnssec-lookaside auto;

Restart BIND/named and move on.
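To confirm validation is actually working after the restart, query the resolver directly and look for the ad (authenticated data) flag in the response header; the zone and resolver address below are just examples:

```shell
# query a signed zone through the local resolver; the 'ad' flag in the
# flags line indicates the answer validated successfully
dig +dnssec isc.org @127.0.0.1 | grep flags
```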

WordPress and security are not the best of friends, but if you’re going to be dragged over the coals by Ivan, you might as well make him work for it. Fail2Ban is a great little service that helps stall brute-force attempts against SSH and similar auth methods; it can also monitor and block persistent failed authentications against WordPress and Webmin. Since WordPress does not log failed login attempts on its own, a simple plugin is required to give fail2ban the proper notifications; that plugin is called “WP fail2ban” and can be found here. You will need to make a few configuration changes to fail2ban to get things working. These are the configurations that worked for me on Fedora:

WordPress jail.local (/etc/fail2ban/jail.local):

[wordpress]
enabled  = true
filter   = wordpress
logpath  = /var/log/messages
maxretry = 5
action   = iptables-multiport[name=wordpress, port="http,https", protocol=tcp]
           sendmail-whois[name=WordPress, dest=root, sendername="The WordPress Bouncer"]

WordPress filter (/etc/fail2ban/filter.d/wordpress.conf):

_daemon = wordpress

# Option:  failregex
# Notes.:  regex to match the password failures messages in the logfile. The
#          host must be matched by a group named "host". The tag "<HOST>" can
#          be used for standard IP/hostname matching and is only an alias for
#          (?:::f{4,6}:)?(?P<host>[\w\-.^_]+)
# Values:  TEXT
failregex = ^%(__prefix_line)sAuthentication failure for .* from <HOST>$
            ^%(__prefix_line)sBlocked authentication attempt for .* from <HOST>$
            ^%(__prefix_line)sBlocked user enumeration attempt from <HOST>$

# Option:  ignoreregex
# Notes.:  regex to ignore. If this regex matches, the line is ignored.
# Values:  TEXT
ignoreregex =
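You can sanity-check the failregex outside of fail2ban; in the filter, the <HOST> tag matches the source address at the end of the line. This is only an approximation (fail2ban expands <HOST> itself, and fail2ban-regex is the proper test tool), and the log line below is a hypothetical example of what the plugin writes:

```shell
# hypothetical syslog line as emitted by the WP fail2ban plugin
line='Oct  1 12:00:00 web1 wordpress(example.com)[1234]: Authentication failure for admin from 192.0.2.55'
# <HOST> matches the trailing source address; approximate it with grep -P
echo "$line" | grep -Po 'Authentication failure for .* from \K[0-9.]+'
```

With fail2ban installed, the real filter can be tested against your live log with `fail2ban-regex /var/log/messages /etc/fail2ban/filter.d/wordpress.conf`.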

For Webmin, all I needed to do was update the [webmin-auth] section to properly reflect the location of failed webmin login attempts:


[webmin-auth]
enabled = true
filter  = webmin-auth
action  = iptables-multiport[name=webmin,port="10007"]
logpath = /var/log/secure
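After updating the jails, restart the service and confirm they came up; the commands assume a systemd-based Fedora (older releases use `service fail2ban restart`):

```shell
sudo systemctl restart fail2ban
# list active jails
sudo fail2ban-client status
# show current failures and bans for a single jail
sudo fail2ban-client status wordpress
```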

Webmin makes certain things easy when managing remote Unix/Linux servers; some things it makes more difficult, if only because its modules don’t get updated very often. Shorewall makes managing large iptables rule sets easy, but its Webmin interface is outdated. For instance, the Blacklist section in the Shorewall Webmin module directs to ‘/etc/shorewall/blacklist’, which according to the Shorewall documentation: ‘The blacklist file is used to perform static blacklisting by source address (IP or MAC), or by application. The use of this file is deprecated and beginning with Shorewall 4.5.7, the file is no longer installed.’

The Shorewall Webmin module still directs the user to this file for modification, so changes made there have no effect. The file you should be editing is ‘/etc/shorewall/blrules’, as documented here.
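For reference, blrules entries follow Shorewall’s usual ACTION/SOURCE/DEST column layout; something like the following (the addresses are placeholders) replaces old blacklist entries:

```
#ACTION         SOURCE                  DEST
DROP            net:192.0.2.55          all
DROP            net:203.0.113.0/24      all
```

Running `shorewall check` before reloading will catch syntax errors in the new file.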

When attempting to compile PHP on CentOS 6.x, you might run into a compile error such as:
php pdo/php_pdo.h: No such file or directory
php pdo/php_pdo_driver.h: No such file or directory

These files do exist, just not in the location that the configure script looks for them. There are two ways to fix this, the first would be to modify the configure script to look in the proper place and the second would be to create two symbolic links for the rogue files. I chose the second method.

The files are in *ext/pdo/, but the configure script looks in *pdo/ so we want to make the pdo directory and create the links within:

make clean
mkdir pdo
ln -s ext/pdo/php_pdo.h pdo/php_pdo.h
ln -s ext/pdo/php_pdo_driver.h pdo/php_pdo_driver.h

OR, more simply…

ln -s ./ext/pdo

Now re-configure and compile. Done.
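The single-link variant works because configure only needs a pdo/ path that resolves to the headers. A quick reproduction in a scratch directory (paths hypothetical, stand-ins for the PHP source tree) shows both headers become reachable through one link:

```shell
# recreate the relevant bit of the source tree in /tmp
mkdir -p /tmp/phpsrc/ext/pdo && cd /tmp/phpsrc
touch ext/pdo/php_pdo.h ext/pdo/php_pdo_driver.h
# one symlink named 'pdo' pointing at ext/pdo covers both headers
ln -s ext/pdo pdo
ls pdo/php_pdo.h pdo/php_pdo_driver.h
```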

Meld is a neat diff tool, but when trying to compare two Perl scripts over SSH the other day, CentOS just crapped out, saying bad things about GConf, ORBit not being configured for TCP/IP, and stale NFS locks: “Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash.”

All baloney.

To fix this I uninstalled the pre-compiled Meld from the Centos repos and rolled my own. Worked out great:

sudo yum remove meld
tar -xvzf meld-1.8.4.tar.gz
cd meld-1.8.4
make prefix=/usr/local/
sudo make install

So before you go hunting around for peculiar configuration changes, try to build your own.

To restrict an SSH user to connecting from a single IP, add the following to the bottom of the SSHD config (/etc/ssh/sshd_config). The address is a placeholder: user1 and user2 may connect from anywhere, while user3 is limited to the listed host.

AllowUsers user1 user2 user3@192.0.2.10

Then bounce the SSH daemon. Done.

Needed to establish an SSH connection from PHP to poll another server for the current git revision of a repo.

The SSH2 extension is a non-standard PECL package; you must compile and install it yourself.

SELinux by default prevents Apache from certain network activity, so it is very likely that when trying to use any of the SSH2 methods you will see several denials logged in your SELinux audit logs. To fix this you will need to allow Apache network access:

sudo setsebool -P httpd_can_network_connect=1
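For reference, building the extension follows the standard PECL flow; the ini path below assumes a Fedora/CentOS layout, and the tarball name depends on the current release:

```shell
# fetch and build the ssh2 extension against the installed PHP
pecl download ssh2
tar -xzf ssh2-*.tgz && cd ssh2-*/
phpize
./configure --with-ssh2 && make
sudo make install
# enable it for Apache/CLI
echo 'extension=ssh2.so' | sudo tee /etc/php.d/ssh2.ini
```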