
Tag Archives: mysql

AWS EBS volume UUID collisions on a DB recovery attempt. An XFS volume restored from a snapshot carries the same filesystem UUID as the original, so the copy refuses to mount on a box where the original is already mounted. On a system that does not have the conflicting volume mounted: mount and then unmount the volume to replay any metadata changes that were pending when the snapshot was taken, read the current UUID, then set a new unique UUID to prevent collisions.

$ mount /dev/xvdz /mnt && umount /mnt                   # replay the XFS log so the UUID can be changed
$ xfs_admin -lu /dev/xvdz                               # print the current label and UUID
$ xfs_admin -U some-unique-uuid-string-thing /dev/xvdz  # set a new UUID ('-U generate' picks a random one)
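
Before handing the volume back to the recovery box, it's worth confirming the change took. A quick check, assuming the volume is still attached as /dev/xvdz and /mnt is free:

$ blkid /dev/xvdz        # should now report the newly assigned UUID
$ mount /dev/xvdz /mnt   # mounts cleanly alongside the original volume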

Tasked with archiving large groups of tables by moving them from one database to another. There are a few caveats depending on the structure of the tables being moved, specifically bugs in MySQL 5.1, 5.5, 5.6, and possibly other versions, documented here. This procedure handles the AUTO_INCREMENT issue, but it will not alter tables containing various types of indexes. There are parameters for the origin and destination databases in case you make a mistake and need to move a group back. The archiveTable parameter is a boolean (1/0); the rest is fairly straightforward.

DROP PROCEDURE IF EXISTS archive_tables;
-- set a temp delimiter for procedure construction
DELIMITER $$
-- create the procedure
CREATE PROCEDURE archive_tables(tablePrefix VARCHAR(32), originTable VARCHAR(32), destinationTable VARCHAR(32), archiveTable BOOLEAN)
BEGIN
    -- declare the variables
    DECLARE done INT DEFAULT FALSE;
    DECLARE tableName VARCHAR(50);
    DECLARE newTableName VARCHAR(70);
    DECLARE mv_query VARCHAR(1000);
    DECLARE alt_query VARCHAR(1000);

    -- create the cursor with the selected tables
    DECLARE cur1 CURSOR FOR SELECT TABLE_NAME
        FROM information_schema.TABLES
        WHERE TABLE_NAME LIKE CONCAT(tablePrefix, '%')
        AND TABLE_SCHEMA = originTable;
    -- this turns 'done' TRUE when there are no more tables
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    -- begin
    OPEN cur1;
    read_loop: LOOP
        -- push the current cursor element into the tableName var
        FETCH cur1 INTO tableName;
        -- if we are done, stop
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET newTableName = CONCAT(destinationTable, '.', tableName);

        -- create the rename query; qualify the source with the origin
        -- database so the call works regardless of the current schema
        SET mv_query = CONCAT('RENAME TABLE ', originTable, '.', tableName, ' TO ', newTableName);
        SET @mvQuery = mv_query;

        -- exec rename
        PREPARE stmt FROM @mvQuery;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        -- are we archiving the relocated tables?
        -- Note: The ARCHIVE engine will not work with all tables; there is also a bug
        --       related to AUTO_INCREMENT columns documented here:
        --       http://bugs.mysql.com/bug.php?id=37871 (Dev is running 5.1.73). The
        --       temp workaround is setting AUTO_INCREMENT to 0, but even this is not
        --       sufficient for all tables. I suggest not using this feature even
        --       though the benefits are many.
        IF archiveTable THEN

            -- create engine conversion query
            SET alt_query = CONCAT('ALTER TABLE ', newTableName, ' AUTO_INCREMENT=0 ENGINE=archive');
            SET @altQuery = alt_query;

            -- set the engine attribute
            PREPARE stmt FROM @altQuery;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END IF;
    END LOOP;
    CLOSE cur1;
END$$
-- restore the standard delimiter
DELIMITER ;
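
Once the procedure is in place, a call from the shell might look like this. A minimal sketch: the 'production' and 'archive' database names and the 'sales_2013_' prefix are made up, the procedure is assumed to live in the production schema, and the final 0 skips the ARCHIVE-engine conversion per the note above:

$ mysql -u root -p -e "CALL production.archive_tables('sales_2013_', 'production', 'archive', 0);"

Swapping the origin and destination arguments moves a table group back if you change your mind.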

When attempting to compile PHP on CentOS 6.x, you might run into a compile error such as:
php pdo/php_pdo.h: No such file or directory
and
php pdo/php_pdo_driver.h: No such file or directory

These files do exist, just not in the location where the configure script looks for them. There are two ways to fix this: the first is to modify the configure script to look in the proper place, and the second is to create two symbolic links for the rogue files. I chose the second method.

The files are in ext/pdo/ under the PHP source tree, but the configure script looks in pdo/, so we make the pdo directory and create the links within it:

make clean
mkdir pdo
# note the ../ — symlink targets resolve relative to the link's own directory
ln -s ../ext/pdo/php_pdo.h pdo/php_pdo.h
ln -s ../ext/pdo/php_pdo_driver.h pdo/php_pdo_driver.h

OR, more simply…

ln -s ./ext/pdo    # creates a link named pdo pointing at ext/pdo

Now re-configure and compile. Done.
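
For reference, the rebuild might look something like the following; the configure flag here is only a placeholder for whatever options your build actually uses:

./configure --with-pdo-mysql
make
make install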