Saturday, August 20, 2011

ERROR: This RRD was created on another architecture

Recently I had to migrate our Cacti installation to a new machine, during which I encountered a few issues.

After copying the RRD files to the new machine, the Cacti web interface at http://myserver/cacti showed empty graphs. (weird)

After digging through the Apache error log, I found out that RRD files are not portable between different architectures:

ERROR: This RRD was created on another architecture

In my case the old machine was 32-bit hardware running a 32-bit OS, and the new machine is 64-bit hardware running a 64-bit OS.

A quick search showed that with rrdtool dump and rrdtool restore one can transfer RRDs between architectures with ease.


To transfer an RRD between architectures, follow these steps:

1.  On the same system where the RRD was created, use rrdtool dump to export the data to XML format.

# for i in ./*.rrd; do rrdtool dump "$i" "../rrd/$i.xml"; done
(converts all the .rrd files in the current directory to XML format)

2.  Transfer the XML dump to the target system.

3.  Run rrdtool restore to create a new RRD from the XML dump.

# for i in ./*.xml; do rrdtool restore "$i" "../rrd/${i%.xml}"; done
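A note on the `${i%.xml}` in the restore loop: it is shell parameter expansion that strips the trailing `.xml`, so each dump is restored under its original `.rrd` name. A quick illustration:

```shell
# ${var%suffix} removes the shortest matching suffix from the value.
i="./load_average.rrd.xml"
echo "${i%.xml}"    # -> ./load_average.rrd
```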

After this, Cacti was up and running with all the old graphs migrated successfully.

Saturday, August 13, 2011

Installing Nagios-plugins and NRPE on Centos 6

To monitor a remote host's CPU load, disk partitions, processes, etc. with Nagios, you need to install NRPE and nagios-plugins on the remote host.

nagios-plugins and NRPE are not available from the official CentOS repositories, so first of all we need to configure the RPMforge repo, from which we can install the required packages.

Read my previous article before configuring and installing anything from a third-party repo.

Installing RPMforge on Centos 6

Download and install the rpmforge-release package. Choose one of the two links below, selecting the one that matches your host's architecture. If you are unsure which one to use, you can check your architecture with the command uname -i.

# rpm -Uvh

# rpm -Uvh

Do not forget to set the priority,

vi /etc/yum.repos.d/rpmforge.repo

name = RHEL $releasever - - dag
baseurl =$basearch/rpmforge
mirrorlist =
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1

Now everything is set up, and you can install the nagios-plugins and NRPE packages from the RPMforge repo:

# yum install nagios-plugins nagios-nrpe

Running NRPE under xinetd

Edit /etc/xinetd.d/nrpe and set the following (use your Nagios server's IP address in place of the placeholder):

disable         = no
only_from       = <Nagios server IP>
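For reference, a typical /etc/xinetd.d/nrpe looks roughly like the snippet below; the paths and the second only_from address are assumptions based on a stock nrpe package, so check them against your own file:

```
service nrpe
{
        flags           = REUSE
        socket_type     = stream
        port            = 5666
        wait            = no
        user            = nagios
        group           = nagios
        server          = /usr/sbin/nrpe
        server_args     = -c /etc/nagios/nrpe.cfg --inetd
        log_on_failure  += USERID
        disable         = no
        only_from       = 127.0.0.1 192.168.0.10
}
```

Only the disable and only_from lines normally need editing.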

Restart xinetd service
# /etc/init.d/xinetd restart

Wednesday, August 3, 2011

My KVM howto

Enabling KVM support on your hardware

The host machine needs an Intel VT or AMD-V capable CPU with support for hardware-assisted virtualization. Check for it with:

# grep -E 'vmx|svm' /proc/cpuinfo

If this command returns output, then your system supports KVM. The vmx CPU feature flag indicates an Intel VT chipset, while the svm flag indicates AMD-V. Note which flag was returned, as it will be useful for loading the right module later.
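The same vmx|svm pattern can be exercised against an arbitrary flags string; the line below is a made-up sample standing in for /proc/cpuinfo output, not real data:

```shell
# Hypothetical flags line standing in for /proc/cpuinfo content.
flags="fpu vme de pse tsc msr pae mce vmx est tm2 ssse3"

# Identical test to the grep above: match either vmx (Intel VT) or svm (AMD-V).
if echo "$flags" | grep -qE 'vmx|svm'; then
    echo "hardware virtualization flag present"
else
    echo "no vmx/svm flag found"
fi
```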

Next, you need to ensure that the KVM-related feature is enabled in the BIOS:

1. In the BIOS menu, select Advanced Step → CPU Options.
2. Make sure the Intel Virtualization Technology option is Enabled.

Installing and configuring KVM-related packages

Install the KVM software using yum:
# yum install kvm

Install additional virtualization management packages:

# yum install libvirt python-virtinst libvirt-python libvirt-client bridge-utils

Configuring KVM after installing the packages

After you have installed the KVM-related packages, load the right KVM modules by following these steps:

1. Insert KVM module by running the following command:
# modprobe kvm

2. Insert the chip-specific KVM module by running one of these commands:
For the AMD chip (svm flag)
# modprobe kvm-amd

For Intel chip (vmx flag)
# modprobe kvm-intel

You can verify that the modules are loaded and running:
# lsmod | grep kvm
kvm_intel    86248  3
kvm          223264 1 kvm_intel

3. Start the libvirtd daemon service:
# /etc/init.d/libvirtd start
Starting libvirtd daemon:

Verify that it is running:
# /etc/init.d/libvirtd status
libvirtd (pid 6584) is running...

4. Set up libvirtd to start on every reboot:
# chkconfig libvirtd on

5. It is best to reboot the host machine at this point.

Creating LVM for Guest OS (Run lvdisplay before creating new LVM)

# lvcreate -L 4G -n kvm5 vg_kvm

-L 4G   = replace with the desired size for the guest's disk
-n kvm5 = replace with the LV name for your guest (kvm#)

Creating KVM guest using virt-install

virt-install --name kvm1 \
--ram 512 \
--os-type='linux' \
--disk path=/dev/mapper/vg_kvm-kvm1 \
--network network:default \
--accelerate \
--vnc \
--cdrom /iso/CentOS-5.5-x86_64-bin-DVD-1of2.iso

For a network installation, replace --cdrom with --location followed by the URL of the OS installation tree.

E.g., CentOS 5.6 64-bit:
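As an illustration only (the mirror URL and LV name below are placeholders, not a real mirror or an existing volume), a network install would look like:

```
virt-install --name kvm2 \
--ram 512 \
--os-type='linux' \
--disk path=/dev/mapper/vg_kvm-kvm2 \
--network network:default \
--accelerate \
--vnc \
--location http://mirror.example.com/centos/5.6/os/x86_64/
```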

Creating Scientific Linux 6 64bit vm with bridge networking and two virtual CPUs

Note: br0 bridge interface should exist before using it for KVM

virt-install --name kvm3 \
--ram 1024 \
--vcpus 2 \
--os-type='linux' \
--disk path=/dev/mapper/vg_kvm-kvm3 \
--network bridge:br0 \
--accelerate \
--vnc \

*** Installing Slackware off NFS (NFS running on machine)

virt-install --name slack64 \
--ram 512 \
--vcpus 1 \
--os-type='linux' \
--disk path=/dev/mapper/vg_kvm-kvm8 \
--network bridge:br0 \
--accelerate --vnc \
--cdrom /iso/slackware_x86_64-13.37-mini-install.iso

Cloning KVM VM
1. LVM based
Cloning KVM (create an LVM partition for the clone first, i.e. 'lvcreate -L 4G -n kvmX vg_kvm'; run lvdisplay to check existing LVs)
virt-clone --original kvm1  \
             --name kvmX \
              --file /dev/mapper/vg_kvm-kvmX --prompt

2. File based
virt-clone --original kvm1  \
             --name kvmX \
              --file /var/lib/libvirt/images/centos6-2.img

Resizing KVM VM disk storage
1. Install the libguestfs-tools package.
2. It is best to have separate partitions for /, /boot, and swap.
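The remaining steps here were cut off; as a rough sketch of how libguestfs-tools is typically used for this (the device and LV names below are assumptions, so inspect your own guest with virt-filesystems first):

```
# List the guest's partitions to find which one to grow
# virt-filesystems --long --parts -a /dev/vg_kvm/kvm3

# Create a larger target LV, then copy the disk while expanding the root partition
# lvcreate -L 8G -n kvm3-big vg_kvm
# virt-resize --expand /dev/sda2 /dev/vg_kvm/kvm3 /dev/vg_kvm/kvm3-big
```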

Troubleshooting the network interface

1. Open /etc/sysconfig/network-scripts/ifcfg-eth0, delete the following line, and save the file
2. Remove the following file and reboot the virtual machine
After the reboot, the eth0 network interface will be available.

Removing an LVM partition: remove an LVM partition that is no longer needed.

# lvchange -an /dev/mapper/vg_kvm-kvm8 (change the LV status to inactive)
# lvremove /dev/mapper/vg_kvm-kvm8 (remove the LVM partition)