MySQL to MariaDB — what??

When migrating from the old Ubuntu server to a new RPM-based distro, I ended up with MariaDB. I guess this is the new M in LAMP. It looks and feels the same, but the migration was a pain.
Some users had to be recreated on MariaDB; simply moving them with the import/export SQL did not work. On the plus side, it let me clean up the database by deleting several old, unused users.
Finally, moving the databases involved a lot of manual steps. First, create all the databases on the new server. Then export each database using phpMyAdmin. Before importing, remove the CREATE DATABASE SQL and keep only the INSERT statements.
phpMyAdmin was the only saving grace, but the latest 4.6.x needs PHP 5.5 or above, which was another headache. I had to run around to get all the right PHP RPMs.
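That pre-import cleanup can be scripted instead of done by hand. A minimal sketch, assuming the phpMyAdmin export lives in a hypothetical mydb.sql with one statement per line (as phpMyAdmin exports by default):

```shell
# Keep only the INSERT statements from the dump; drop CREATE DATABASE/USE lines
# (mydb.sql is a hypothetical file name -- use your own export)
grep -E '^INSERT' mydb.sql > mydb_inserts.sql
```

Then import mydb_inserts.sql into the database you pre-created on the new server.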

Moving off Ubuntu to RPM based

After several failed upgrades, I finally decided to move off Ubuntu. The site was down for days, thanks to Ubuntu’s boot mess: moving from 10.x to 12.x resulted in a boot failure, and after fixing that, the upgrade to 14.10 resulted in the same mess.
Do they even sanity test?
I’ll detail more about how I migrated the web server in upcoming posts.

‘sudo’ stopped working!

I wanted to do some maintenance and found out that my login could not sudo anymore.  This can be very disturbing.

luser:~$ sudo apt-get update
[sudo] password for luser:
luser is not in the sudoers file. This incident will be reported.

Report all you want; my login was part of the sudo group until a day ago. What changed?
Before you suspect an intrusion, try to remember whether any group membership changes were made recently. If so, that likely removed the user from the admin/sudo group.
Luckily, I remembered that I had added a group with:
usermod -G without the -a flag. Remember, “-a” is needed because it appends to the existing group list; without it, you get kicked out of all the other groups.

So the fix is to either edit the sudoers file or put the user back into the admin group. To do that, I had to boot the server into single-user mode and drop to a root shell, which is a bit of a pain in Ubuntu. I prefer to run:

usermod -a -G admin luser

Apache failed to start after upgrading Ubuntu

The problem was that some libraries were not available to Apache.
luser:~# service apache2 start
* Starting web server apache2
apache2: Syntax error on line 166 of /etc/apache2/apache2.conf: Syntax error on line 33 of /etc/apache2/mods-enabled/mod-security.load: Cannot load /usr/lib/libxml2.so.2 into server: /usr/lib/libxml2.so.2: cannot open shared object file: No such file or directory
Action 'start' failed.

Many sites recommended symlinking the libxml file.  Here is what I had to do.
luser:~# ls -l /usr/lib/libxml2.so.2
ls: cannot access /usr/lib/libxml2.so.2: No such file or directory
luser:~# locate libxml2.so
/usr/lib/libxml2.so.2
/usr/lib/libxml2.so.2.9.4

locate says the file exists. Really?
luser:~# ls -l /usr/lib/libxml2.so.2
ls: cannot access /usr/lib/libxml2.so.2: No such file or directory

Ubuntu and some packages are known to do this when you upgrade: the library has moved into the multiarch directory, and the locate database is simply stale.
Solution:
luser:~# cd /usr
luser:/usr# find . -name libxml2.so.2
./lib/powerpc-linux-gnu/libxml2.so.2
luser:/usr# sudo ln -s /usr/lib/powerpc-linux-gnu/libxml2.so.2 /usr/lib/libxml2.so.2

Start your web server.

Note: the above solution was executed on a non-x86 system.  You can link the file the same way on an x86/x64 host; only the multiarch directory name differs (e.g. x86_64-linux-gnu).
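Side note on the ln -s call: a relative target is resolved from the link’s own directory, not from the directory where you ran ln, so an absolute target is the safer choice. A scratch-directory sketch:

```shell
# Set up a fake multiarch layout in a scratch directory
mkdir -p scratch/lib/powerpc-linux-gnu
touch scratch/lib/powerpc-linux-gnu/libxml2.so.2

# Wrong: './lib/...' is resolved from scratch/lib, so the link dangles
ln -s ./lib/powerpc-linux-gnu/libxml2.so.2 scratch/lib/broken.so.2

# Right: an absolute target always resolves
ln -s "$PWD/scratch/lib/powerpc-linux-gnu/libxml2.so.2" scratch/lib/libxml2.so.2
```

`ls -l` on the first link shows it pointing at a path that does not exist, which is exactly the symptom Apache reported above.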

Back after a long gap

Been a while. I had to upgrade the Linux server.
I have also been manually updating WordPress core and plugins. For some reason the WordPress upgrade process wants write access to the main htdocs directory. Why?

Escaping special characters and & in awk

awk will happily emit escaped special characters in its output, which helps when file names contain shell metacharacters such as ? and &.
For example, rename the following files by removing everything after the three-character file extension.

all_txt.zip?89&=1
backup.zip?0f3b97d464cbytrte&13856505
installer.zip?24hj4085&_23232
new_project_proj.zip?WD898f5c5csite&=108766
ls *\?* | awk -F . '{print "mv \""$0"\" " $1".zip"}'

This awk command works for files that share a common extension. To complete the job and rename the files in the same line, pipe the output to sh:

ls *\?* | awk -F . '{print "mv \""$0"\" " $1".zip"}' | sh
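awk is not strictly needed here; the same rename can be done with shell parameter expansion, which strips everything from the first ‘?’ without building mv strings:

```shell
# Strip everything from the first '?' of each matching file name
for f in *\?*; do
  [ -e "$f" ] || continue          # skip the literal pattern when nothing matches
  mv -- "$f" "${f%%\?*}"
done
```

The `${f%%\?*}` expansion removes the longest suffix starting at a ‘?’, so all_txt.zip?89&=1 becomes all_txt.zip.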

Set up your downloads using curl

Here is a quick and easy tip to download multiple links from a file.

for i in `cat mp3_links.txt`
do
curl -O -A "Mozilla/5.0 (Windows NT 6.0; rv:11.0) Gecko/20100101 Firefox/11.0" "$i"
sleep $(( (RANDOM % 10) + 4 ))
done
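The sleep line relies on bash arithmetic over $RANDOM (a bash feature, not plain sh); the delay works out to an integer between 4 and 13 seconds inclusive:

```shell
# RANDOM % 10 yields 0-9; adding 4 shifts the range to 4-13 seconds
delay=$(( (RANDOM % 10) + 4 ))
echo "$delay"
```

The randomized pause keeps the downloads from hammering the server in a tight loop.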


Enable local repo off the OS DVD iso

I have been working on adding a local repo so that yum can install the packages from local media before going out.
vi /etc/yum.repos.d/iso.repo

[MY-Local-ISO-repo]
name=Local ISO repo
baseurl=file:///mnt/repo
enabled=1

Note: Make sure the label [MY-Local-ISO-repo] does not contain any spaces.  Otherwise yum will complain of ‘bad id for repo’.


mkdir -p /mnt/repo/iso
mount -o loop linux_dvd_media.iso /mnt/repo/iso
yum clean all
cd /mnt/repo
createrepo .
find . -name "*GPG*"

Then import the GPG keys found by the find command. Here is an example:

rpm --import /mnt/repo/iso/RPM-GPG-KEY-redhat-release
rpm --import /mnt/repo/iso/RPM-GPG-KEY-redhat-beta

That’s pretty much it.  Now you can use yum to install from the local media.

Use awk to rename non-pattern files

I have some files that don’t have any unique naming sequence or convention.  Well, the only thing common is the extension, say .log.  I would like to rename the files in a sequence based on time stamp or on ls output.  Here is a sample list of files:

auditlog.20130123.log
db.Wed_Jan_23_15_31_47_GMT_2013.log
defaultlog_agent.log
receiver.log
receiver.Wed_Jan_23_12_01_48_GMT_2013.log
runner_os_component.log
runner_plugin_transmitter_queue.log
runnerplugin.Wed_Jan_23_14_33_02_GMT_2013.log
transmitter.log
transmitter_queue_spool_esp_prod.Wed_Jan_23_11_45_09_GMT_2013.log
transmitter.Wed_Jan_23_12_00_26_GMT_2013.log

I would prefer to use awk to rename these files.  Here is one simple method:

ls *.log | awk '{ print "mv "$0" mylogs_"NR".log" }' > rename_files.sh

This awk statement creates a script that renames all the files to mylogs_1.log, mylogs_2.log, and so on; run it with sh rename_files.sh.
One can even keep part of the original file name in the new name.

ls *.log | awk -F. '{ print "mv "$0" "$1"_"NR".log" }' > rename_files.sh

I used “.” as the delimiter, so $1 is the file name up to the first dot.  The files will end up as auditlog_1.log, db_2.log, and so on.
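Putting it together, the generated script can be piped straight to sh, as in the zip-rename post. A quick sketch (the escaped quotes guard against unusual file names):

```shell
# Generate the mv commands and execute them in one pipeline
ls *.log | awk -F. '{ print "mv \""$0"\" "$1"_"NR".log" }' | sh
```

Dry-run first by dropping the `| sh` and eyeballing the mv commands before letting them loose.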

😉

Simple HA for Apache on Linux

I have a very simple high-availability setup for Apache on Linux.  This setup works really well for small sites running on VMware or another virtualization platform.
First, install two identical Linux VMs.  This example uses CentOS.  You can set up one VM and copy/clone it to create the second.
Log in to each host and make sure they have proper host names and FQDNs.  Here is an example /etc/hosts; you may have to edit the /etc/sysconfig/network file as well.

192.168.10.135           www1.example.com  www1
192.168.10.128           www2.example.com  www2

Set up passwordless login between the nodes.

ssh-keygen -t dsa
cat .ssh/id_dsa.pub | ssh root@www2 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'

The next steps are to be performed on each node unless noted otherwise.  We have to add a repository in order to install the clustering packages.  Add the ‘clusterlabs’ repo to yum in /etc/yum.repos.d/clusterlabs.repo:

[clusterlabs]
name=High Availability/Clustering server technologies (epel-5)
baseurl=http://www.clusterlabs.org/rpm/epel-5
type=rpm-md
gpgcheck=0
enabled=1

Now, install the following packages.  This will take some time to complete.

yum -y install httpd*
yum -y install glibc*
yum -y install gcc*
yum -y install lib*
yum -y install flex*
yum -y install net-snmp*
yum -y install OpenIPMI*
yum -y install python-devel
yum -y install perl*
yum -y install openhpi*
yum -y install cluster-glue*
yum -y install heartbeat*
yum -y install resource-agents-1.0.4-1.1.el5.x86_64
yum -y install resource-agents-debuginfo-1.0.4-1.1.el5.x86_64
##  If all goes well then we copy the ha conf files
cp `rpm -q heartbeat -d | awk -F/ '{print $1"/"$2"/"$3"/"$4"/"$5}'| head -1`/authkeys /etc/ha.d/
cp `rpm -q heartbeat -d | awk -F/ '{print $1"/"$2"/"$3"/"$4"/"$5}'| head -1`/ha.cf /etc/ha.d/
cp `rpm -q heartbeat -d | awk -F/ '{print $1"/"$2"/"$3"/"$4"/"$5}'| head -1`/haresources /etc/ha.d/
# There may be a better way; dirname `rpm -q heartbeat -d | head -1` prints the same path

Once all the packages are installed and the files copied over, we have to edit the three conf files.  For authkeys, just un-comment these lines.  You can pick any of the authentication methods, but sha1 and md5 are preferred over crc.  For sha1, the file should look like this:

auth 2
#1 crc
2 sha1 Testing123
#3 md5 Hello!

Once done, change the access permissions on authkeys.  For security, this file should not be readable by anyone but root.

chmod 600 authkeys

Edit the ha.cf file, un-comment following lines:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 15
warntime 10
initdead 120
udpport 694
bcast eth0
auto_failback on
node www1.example.com # make sure uname -n shows the FQDN
node www2.example.com # if it shows just the short hostname, use the hostname here instead

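The third conf file, haresources, ties the virtual IP and the httpd service to the preferred node. A minimal example, assuming www1 is the primary and using the virtual IP that Apache will listen on below (the line must be identical on both nodes):

```
www1.example.com 192.168.10.222 httpd
```

A bare IP in haresources is shorthand for heartbeat’s IPaddr resource agent, which brings 192.168.10.222 up on whichever node is active.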
We now have to make some changes to the httpd.conf file in each node.  Find and change this line:

Listen 80
to
Listen 192.168.10.222:80  # Setup any IP here, make sure no other host or interface is using it

If you start httpd now, it will fail:

service httpd start

This is normal: Apache is trying to bind 192.168.10.222, which is not yet configured on any local interface; heartbeat will bring that IP up on the active node.  We also have to make sure that Apache starts on reboot.

chkconfig httpd on

We will stick with the default settings for now.  The default “DocumentRoot” for Apache is /var/www/html.
Let’s create a simple index.html page in /var/www/html on each node.  Here is a sample page for the first node:

<HTML>
<HEAD>
<TITLE>Sample Page</TITLE>
</HEAD>
<BODY BGCOLOR="BLUE">
<CENTER><H1>Web Page on node1</H1></CENTER>
</BODY>
</HTML>

Copy the above page to node2 and change “Web Page on node1” to “Web Page on node2”.  All set; now we just have to start the heartbeat service on each node.  You may have to stop and start it a couple of times.

/etc/init.d/heartbeat start

Now enter the IP address, 192.168.10.222, in your browser.  If all goes well, you should see a blue page from node1.  Shut down heartbeat on node1 and refresh the page in the browser; this time you will see the page served from the second node.

🙂
