Category Archives: Linux

Updating the PostgreSQL TimescaleDB extension from pre-0.8.0-dev builds: you must re-create the extension and hypertables, i.e.:
CREATE database dbname;
\c dbname
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
SELECT create_hypertable('time_table', 'time');
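If you need to carry the old data over into the re-created hypertable, a plain data-only dump and restore is one way to do it. This is just a sketch; ‘olddb’ and the connection details are placeholders for your setup:
pg_dump -U postgres --data-only -t time_table olddb > time_table_data.sql
psql -U postgres dbname < time_table_data.sql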

This is the second time I’ve run into this, so I’m making a note of it. If you’re compiling and installing git/gitweb outside of a package manager, the SELinux file contexts might not be correct, preventing Apache from executing the gitweb CGI script. Something like this will appear in /var/log/audit/audit.log:
avc: denied { search } for pid=16678 comm="gitweb.cgi" dev=sda2 ...
avc: denied { read } for pid=16678 comm="gitweb.cgi" dev=sda2 ...
avc: denied { open } for pid=16678 comm="gitweb.cgi" dev=sda2 ...

To fix the issue:
$ sudo chcon -t httpd_git_script_exec_t /path/to/gitweb.cgi

If that doesn’t work, your SELinux policies might be different; you can try:
sudo restorecon -v /path/to/gitweb/dir/*
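chcon changes can be lost on a filesystem relabel, so to make the context permanent you can record it with semanage and then re-apply it with restorecon. A sketch, assuming your gitweb checkout lives at /path/to/gitweb and your policy provides the httpd_git_script_exec_t type:
sudo semanage fcontext -a -t httpd_git_script_exec_t '/path/to/gitweb(/.*)?'
sudo restorecon -Rv /path/to/gitweb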

This was a real annoyance to sort out, and I’ll likely need to do it again on the next kernel update, so I’m documenting it here. The Corsair K70 is a great device; the pretty lights aside, it’s a solid keyboard. It does, however, have one particularly annoying behavior on various versions of Linux: an approximately 12-30 second delay during boot while the USB polling routine fails to enumerate the device during report retrieval and times out before continuing. The following pertains to Ubuntu and Fedora/CentOS.

In your boot log (/var/log/boot.log) you will see a series of logged events similar to this:

workstation kernel: [   11.945167] hid-generic 0003:1B1C:1B13.0002: usb_submit_urb(ctrl) failed: -1
workstation kernel: [   11.945175] hid-generic 0003:1B1C:1B13.0002: timeout initializing reports
workstation kernel: [   11.945284] input: Corsair Corsair K70 RGB Gaming Keyboard  as /devices/pci0000:00/0000:00:14.0/usb2/2-10/2-10:1.1/0003:1B1C:1B13.0002/input/input15
workstation kernel: [   11.945373] hid-generic 0003:1B1C:1B13.0002: input,hidraw1: USB HID v1.11 Keyboard [Corsair Corsair K70 RGB Gaming Keyboard ] on usb-0000:00:14.0-10/input1
workstation kernel: [   21.954441] hid-generic 0003:1B1C:1B13.0003: timeout initializing reports
workstation kernel: [   21.954651] hid-generic 0003:1B1C:1B13.0003: hiddev0,hidraw2: USB HID v1.11 Device [Corsair Corsair K70 RGB Gaming Keyboard ] on usb-0000:00:14.0-10/input2
workstation kernel: [   31.967629] hid-generic 0003:1B1C:1B13.0004: usb_submit_urb(ctrl) failed: -1
workstation kernel: [   31.967642] hid-generic 0003:1B1C:1B13.0004: timeout initializing reports
workstation kernel: [   31.967762] hid-generic 0003:1B1C:1B13.0004: hiddev0,hidraw3: USB HID v1.11 Device [Corsair Corsair K70 RGB Gaming Keyboard ] on usb-0000:00:14.0-10/input3

As you can see, the K70 is actually composed of three different input devices: the first is the light controller, the second the media controls, and the third the keyboard itself, which eventually loads. The first two fail during report retrieval and are the cause of the delay. This is likely a kernel bug and may take some time before support is added; in the meantime we can work around the problem by introducing a USB “quirks” directive that instructs the kernel not to wait for these devices to be recognized/reported before continuing.

To do this, find the device’s vendor and product IDs:
~$ lsusb -v
which gives something like
Bus 002 Device 003: ID 1b1c:1b13 Corsair
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x1b1c Corsair
idProduct 0x1b13

bcdDevice 1.30
iManufacturer 1
iProduct 2
iSerial 3
bNumConfigurations 1
Configuration Descriptor:
<<<******etc, etc******>>>

We need the ‘idVendor’ and ‘idProduct’ values to construct the quirks directive. Now open a terminal, use vim/nano/gedit/whatever to open ‘/etc/modprobe.d/usbhid.conf’, and add the following:
options usbhid quirks=0x1B1C:0x1B13:0x20000000

The syntax of this directive is ‘usbhid quirks=vendor:product:quirk’, where ‘quirk’ is the flag HID_QUIRK_NO_INIT_REPORTS defined in the 3.16 kernel sources (‘linux/hid.h’): #define HID_QUIRK_NO_INIT_REPORTS 0x20000000. This simply instructs the kernel not to attempt to retrieve an initial device report from the K70 on boot.
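Note that /etc/modprobe.d only applies when usbhid is built as a module; if it’s compiled into your kernel, the same value can be passed on the kernel command line instead. Roughly, using the stock GRUB paths (adjust for your distro):
# append usbhid.quirks=0x1B1C:0x1B13:0x20000000 to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora/CentOS
sudo update-grub                              # Ubuntu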

Now we need to rebuild the initramfs so the change is picked up during early boot.
Ubuntu:~$ sudo update-initramfs -u
Fedora:~$ sudo dracut -f

Now reboot and notice there’s no more delay.
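If you want to confirm the quirk actually took effect, the parameter is visible through sysfs once usbhid is loaded (this assumes usbhid is a module on your build):
cat /sys/module/usbhid/parameters/quirks
dmesg | grep -i 1b1c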

Tasked with archiving large groups of tables, moving them from one database to another. There are a few caveats depending on the architecture of the tables being moved, specifically bugs in MySQL 5.1, 5.5, 5.6 (and possibly other versions) documented here. This procedure handles the AUTO_INCREMENT issue, but will not alter tables containing various types of indexes. There are parameters for the origin and destination databases in case you make a mistake and need to move a table back. The archiveTable parameter is a boolean (1/0); the rest is fairly straightforward.

DROP PROCEDURE IF EXISTS archive_tables;
-- set a temp delimiter for procedure construction
DELIMITER $$
-- create the procedure
CREATE PROCEDURE archive_tables(tablePrefix VARCHAR(32), originTable VARCHAR(32), destinationTable VARCHAR(32), archiveTable BOOLEAN)
BEGIN
    -- declare the variables
    DECLARE done INT DEFAULT FALSE;
    DECLARE tableName VARCHAR(50);
    DECLARE newTableName VARCHAR(70);
    DECLARE mv_query VARCHAR(1000);
    DECLARE alt_query VARCHAR(1000);
	
	-- create the cursor with the selected tables
    DECLARE cur1 CURSOR FOR SELECT TABLE_NAME 
	    FROM information_schema.TABLES 
		WHERE TABLE_NAME LIKE CONCAT(tablePrefix,'%') 
		AND TABLE_SCHEMA=originTable;
	-- this turns 'done' TRUE when there are no more tables
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

	-- begin
    OPEN cur1;
    read_loop: LOOP
	    -- push the current cursor element into the tableName var
        FETCH cur1 INTO tableName;
		-- if we are done, stop
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET newTableName = CONCAT(destinationTable,'.',tableName);

		-- create the rename query
        SET mv_query = CONCAT('RENAME TABLE ', originTable, '.', tableName, ' TO ', newTableName);
        SET @mvQuery = mv_query;

		-- exec rename
        PREPARE stmt FROM @mvQuery;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        -- are we archiving the relocated tables?
		-- Note: This engine will not work with all tables, there is also a bug related to AI columns
		--       documented here: http://bugs.mysql.com/bug.php?id=37871 (Dev is running 5.1.73) The
		--       temp workaround is setting AUTO_INCREMENT to 0, but even this is not sufficient for
		--       all tables. I suggest not trying to use this feature even though the benefits are many.
		IF archiveTable THEN
		    
			-- create engine conversion query
		    SET alt_query = CONCAT('ALTER TABLE ', newTableName, ' AUTO_INCREMENT=0 ENGINE=archive');
			SET @altQuery = alt_query;
			
			-- set the engine attribute
			PREPARE stmt FROM @altQuery;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
		END IF;
    END LOOP;
    -- release the cursor
    CLOSE cur1;
END$$
-- restore the standard delimiter
DELIMITER ;
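Calling it looks something like this; the prefix and database names here are placeholders, and it assumes the procedure was created in the origin database:
mysql -u root -p -e "CALL app_db.archive_tables('logs_', 'app_db', 'app_archive', 1);"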

The errors encountered during PHP’s configure:
checking for sysvipc shared memory support... no
checking for mmap() using MAP_ANON shared memory support... no
checking for mmap() using /dev/zero shared memory support... no
checking for mmap() using shm_open() shared memory support... no
checking for mmap() using regular file shared memory support... no
checking "whether flock struct is linux ordered"... "no"
checking "whether flock struct is BSD ordered"... "no"
configure: error: Don't know how to define struct flock on this system, set --enable-opcache=no

I encountered these on a local system recently, a system where PHP 5.5.12 had successfully compiled WITH opcache several weeks prior. One of the major performance advantages of the 5.5 generation is the opcache extension, so disabling it was not an option. Long story short: the flock (file lock) struct layout is passed to make by libltdl, which is part of the GNU libtool family, and this information is required by the opcache extension to establish part of its memory-mapping strategy during compilation. How these bits became lost or corrupted since the last install is a mystery, but I’m not Angela Lansbury, and I need this to work because there is a particular LDAP-related bugfix that may impact us. So this is what needs to be done:

$ sudo yum reinstall libtool libtool-ltdl libtool-ltdl-devel
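With libtool back in place, a clean re-run of configure should pick the flock layout up again. The skeleton looks like this; your real configure flags will be much longer:
cd php-5.5.12
make distclean                  # clear any cached configure results
./configure --enable-opcache    # plus the rest of your usual flags
make
sudo make install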

If you start seeing messages like
validating @0xb4a348a98: choices-st.truste.com AAAA: no valid signature found
validating @0xb4224288: mozilla.com SOA: no valid signature found
validating @0xb42f74910: choices-st.truste.com AAAA: no valid signature found

in your syslog, then check your BIND config. On Red Hat systems it’s located at /etc/named.conf, and if DNSSEC is enabled (as it should be) it will contain a set of configuration options that read:
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;

The ambiguity here resides in the line dnssec-validation yes;, which instructs named to validate signatures but, without further direction, does not give it a set of root keys (trust anchors) to validate against, so named is unable to validate the signatures.

To correct this, change the ‘yes’ option to ‘auto’, which instructs named to use the built-in set of root keys it ships with. Your DNSSEC options should look something like this:
dnssec-enable yes;
dnssec-validation auto;
dnssec-lookaside auto;

Restart BIND/named and move on.
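Once named is back up you can confirm validation is working by querying the local resolver with DNSSEC requested; a validated answer carries the ‘ad’ (authenticated data) flag in the header:
dig +dnssec www.isc.org @127.0.0.1 | grep flags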

WordPress and security are not the best of friends, but if you’re going to be dragged over the coals by Ivan you might as well make him work for it. Fail2Ban is a great little service that helps stall brute-force attempts against SSH and similar auth methods; it can also be used to monitor and block persistent failed authentications against WordPress and Webmin. Since WordPress does not log failed login attempts on its own, a simple plugin is required to give Fail2Ban the proper notifications; that plugin is called “WP fail2ban” and can be found here. You will need to make a few configuration changes to Fail2Ban to get things working. These are the configurations that worked for me on Fedora:

WordPress jail.local (/etc/fail2ban/jail.local):

[wordpress]
enabled  = true
filter   = wordpress
logpath  = /var/log/messages
maxretry = 5
action   = iptables-multiport[name=wordpress, port="http,https", protocol=tcp]
           sendmail-whois[name=Wordpress, dest=root, sender=fail2ban@jackson-brain.com, sendername="The WordPress Bouncer"]

WordPress filter (/etc/fail2ban/filter.d/wordpress.conf):

[INCLUDES]
before = common.conf

[Definition]

_daemon = wordpress

# Option:  failregex
# Notes.:  regex to match the password failure messages in the logfile. The
#          host must be matched by a group named "host". The tag "<HOST>" can
#          be used for standard IP/hostname matching and is only an alias for
#          (?:::f{4,6}:)?(?P<host>[\w\-.^_]+)
# Values:  TEXT
#
failregex = ^%(__prefix_line)sAuthentication failure for .* from <HOST>$
            ^%(__prefix_line)sBlocked authentication attempt for .* from <HOST>$
            ^%(__prefix_line)sBlocked user enumeration attempt from <HOST>$

# Option:  ignoreregex
# Notes.:  regex to ignore. If this regex matches, the line is ignored.
# Values:  TEXT
#
ignoreregex =
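Before reloading anything it’s worth confirming the filter actually matches what the plugin writes to syslog; fail2ban ships a test tool for exactly that:
fail2ban-regex /var/log/messages /etc/fail2ban/filter.d/wordpress.conf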

For Webmin, all I needed to do was update the [webmin-auth] section to properly reflect the location of failed webmin login attempts:

[webmin-auth]

enabled = true
filter  = webmin-auth
action  = iptables-multiport[name=webmin,port="10007"]
logpath = /var/log/secure
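With both jails in place, restart Fail2Ban and check that they came up; the status output lists the monitored log and any currently banned addresses:
sudo service fail2ban restart          # or: sudo systemctl restart fail2ban
sudo fail2ban-client status wordpress
sudo fail2ban-client status webmin-auth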

Webmin makes certain things easy when managing remote Unix/Linux servers; some things it makes more difficult, if only because its modules don’t get updated very often. Shorewall makes managing large iptables rule sets easy, but its Webmin interface is outdated. For instance, the Blacklist section in the Shorewall Webmin module points to ‘/etc/shorewall/blacklist’, which, according to the Shorewall documentation: ‘The blacklist file is used to perform static blacklisting by source address (IP or MAC), or by application. The use of this file is deprecated and beginning with Shorewall 4.5.7, the file is no longer installed.’

The Shorewall Webmin module still directs the user to this file for modification, so any changes made there have no effect. The file you should be editing is ‘/etc/shorewall/blrules’, as documented here.
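For reference, a blrules entry is just an ACTION/SOURCE/DEST line; something like the following (the address is a placeholder) drops all traffic from a single host, and a check plus restart pushes it live:
echo 'DROP    net:203.0.113.45    all' | sudo tee -a /etc/shorewall/blrules
sudo shorewall check && sudo shorewall restart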

When attempting to compile PHP on CentOS 6.x, you might run into compile errors such as:
php pdo/php_pdo.h: No such file or directory
and
php pdo/php_pdo_driver.h: No such file or directory

These files do exist, just not in the location the configure script looks for them. There are two ways to fix this: the first is to modify the configure script to look in the proper place, and the second is to create two symbolic links for the rogue files. I chose the second method.

The files are in *ext/pdo/, but the configure script looks in *pdo/, so we create the pdo directory and make the links within it:

make clean
mkdir pdo
ln -s ext/pdo/php_pdo.h pdo/php_pdo.h
ln -s ext/pdo/php_pdo_driver.h pdo/php_pdo_driver.h

OR, more simply…

ln -s ./ext/pdo

Now re-configure and compile. Done.

Meld is a neat diff tool, but when I tried to compare two Perl scripts over SSH the other day, CentOS just crapped out with complaints about GConf, ORBit not being configured for TCP/IP, and stale NFS locks. Basically: “Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash.”

All baloney.

To fix this I uninstalled the pre-compiled Meld from the CentOS repos and rolled my own. Worked out great:

sudo yum remove meld
wget https://git.gnome.org/browse/meld/snapshot/meld-1.8.4.tar.gz
tar -xvzf meld-1.8.4.tar.gz
cd meld-1.8.4
make prefix=/usr/local/
sudo make install
meld

So before you go hunting around for peculiar configuration changes, try to build your own.