Booting LiveCDs with Cobbler

As my network grows it becomes increasingly convenient to be able to boot a live CD rescue environment easily.  Even though most of the hosts are virtual and can be booted directly from an ISO, I think it’s even better to PXE boot them from Cobbler since it’s already set up.  The documentation for booting live CDs on Cobbler’s wiki did not seem to work for the Ubuntu Rescue Remix.  It’s possible that some modifications are needed for Ubuntu-based images (the functionality provided by the livecd-iso-to-pxeboot script seems to be aimed at RedHat-style distros), but instead I used nfsroot, which worked without issue.

First, you’ll want to mount your Live CD via loopback and run a Cobbler import:

# mount ubuntu-rescue-remix-12.04.iso /mnt -o loop
# cobbler import --name=ubuntu-rescue-remix-12.04 --path=/mnt --breed=ubuntu

Next I configured NFS.  My configuration is a bit unique: I serve NFS from my file/VM server, and Cobbler is a VM which has its own application files mounted via NFS as well:

# grep cobbler /etc/exports
/srv/nfs/cobbler <cobblervmip>(rw,async,subtree_check,no_root_squash)
/srv/nfs/cobbler/webroot/cobbler <dhcp-class-c>/24(ro,sync,subtree_check,no_wdelay,insecure_locks,no_root_squash,insecure)

Create the distro as usual and set the kernel options to boot from NFS:

# cobbler distro add --name=ubuntu-rescue-remix-12.04 --kernel=/var/www/cobbler/ks_mirror/ubuntu-rescue-remix-12.04/casper/vmlinuz --initrd=/var/www/cobbler/ks_mirror/ubuntu-rescue-remix-12.04/casper/initrd.gz
# cobbler distro edit --name=ubuntu-rescue-remix-12.04 --kopts='nfsroot=<cobbler IP>:/var/www/cobbler/ks_mirror/ubuntu-rescue-remix-12.04 ip=dhcp netboot=nfs boot=casper'

There was one minor issue I encountered while attempting to mount the nfsroot: “short read: 24 < 28”.  The only thing a Google search turned up was a posting which goes into a bit more detail on the source of the problem.  Apparently, if you have configured hosts.allow/deny files they can get in the way, because the kernel uses NFSv3 for an nfsroot.  I mistakenly assumed NFS was working fine, since my system was configured to use version 4 once booted.  Also, the tcp_wrappers hosts.allow/deny files do not understand CIDR notation (unlike /etc/exports); you’ll have to use “192.168.0.” to specify a /24.
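For example, a tcp_wrappers entry covering a /24 would look something like the following (the daemon list and subnet here are only illustrative; check which daemons on your system are actually wrapped):

```
# /etc/hosts.allow -- no CIDR here, a trailing dot matches the whole /24
rpcbind mountd statd lockd : 192.168.0.
```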

You will also need to add a profile to actually boot the live CD, but that is straightforward.  Other than that it should work without issue!
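Adding the profile is a one-liner pointing at the distro created above (the profile name is just whatever you want the PXE menu entry to be):

```
# cobbler profile add --name=ubuntu-rescue-remix-12.04 --distro=ubuntu-rescue-remix-12.04
# cobbler sync
```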

Chroot’ed BIND and Apparmor

Recently I set up a server running BIND on my network to serve as an alternative to updating hosts files for my VMs…  previously I accomplished this via Puppet and it worked OK, but it needed to change.  There were some minor additions to the how-to to get it working in the chroot; most were related to Apparmor profiles, but some libraries were needed as well.

These notes assume you’re running Ubuntu 12.04 LTS; earlier (or later) releases will have different requirements.  I’m also using /var/chroot/named as the path for the chroot.

Once you create the chroot environment, you’ll need OpenSSL libraries:

# mkdir -p /var/chroot/named/usr/lib/x86_64-linux-gnu/openssl-1.0.0/engines
# cp -a /usr/lib/x86_64-linux-gnu/openssl-1.0.0/engines/* /var/chroot/named/usr/lib/x86_64-linux-gnu/openssl-1.0.0/engines/

Then the Apparmor profile must be updated to reflect the chroot and library.  I just created a new file under ‘local’:

root@ns1:~# cat /etc/apparmor.d/local/usr.sbin.named 
# Site-specific additions and overrides for usr.sbin.named.
# For more details, please see /etc/apparmor.d/local/README.

# named chroot
/var/chroot/named/** r,
/var/chroot/named/etc/bind/** r,
/var/chroot/named/var/lib/bind/ rw,
/var/chroot/named/var/lib/bind/** rw,
/var/chroot/named/var/cache/bind/ rw,
/var/chroot/named/var/cache/bind/** rw,

/var/chroot/named/var/run/named/ w,
/var/chroot/named/var/run/named/session.key w,

/var/chroot/named/usr/lib/x86_64-linux-gnu/openssl-1.0.0/engines/ rm,

Don’t forget to restart Apparmor for the changes to take effect!
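On Ubuntu either of the following should do it (reloading just the one profile avoids touching the others):

```
# service apparmor reload
# apparmor_parser -r /etc/apparmor.d/usr.sbin.named
```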

NFS root on SheevaPlug with Debian

I broke out my SheevaPlug the other day and decided to tinker with it a bit… recently I’ve had some new project ideas where it would be of use.  I also wanted to configure it with an NFS root, both to avoid flash wear and to provide an easy way to back up the device.  This is an attempt at documenting the process, in the hope it may help someone else past all the issues I ran into.  YMMV, damnum absque injuria, etc…

The plug had previously been sitting in my closet for some time; in fact, so long that the Ubuntu Jaunty 9.04 install it came with was badly outdated by this point.  Unfortunately, upgrading to any more recent Ubuntu release will not work, as they are compiled for newer ARM architectures; the CPU in the SheevaPlug is only an ARMv5, and 9.04 was the last version to support this core.  Ubuntu 9.10 was built for ARMv6, and 10.04 and later target the Cortex ARMv7 core.

However, we’re still in luck, as Debian has an ARM port which supports Marvell’s Kirkwood platform, and thanks to some wonderful work by Martin we have an easy-to-follow process for installing Squeeze on the SheevaPlug.  There are a few caveats and issues to contend with, and you will probably have to upgrade U-Boot (Marvell shipped their own variant which added support for plug computers, but newer versions support them directly now).  This is all explained very well on Martin’s Debian installation page.  I recommend choosing the option for loading the installer via TFTP, as you will need it later if you want to use NFS root.  I also initially installed Debian on an SD card; for some reason a USB drive did not work.

Once the installation is done, boot into your new Squeeze install and prepare an NFS mount on another box somewhere which will serve your rootfs.  I perused the DisklessUbuntuHowTo doc to get a basic idea of how it works; you’ll be doing something similar, except the initrd update process is different, as explained below.  Once your NFS share is created on the server side, mount it on the SheevaPlug and copy the files over (obviously replacing the IPs and paths as necessary):

mount -t nfs <server IP>:/nfsroot /mnt
cp -ax /. /mnt/.
cp -ax /boot/. /mnt/boot/.

The Ubuntu HOW-TO also lists copying /dev, but this will not be needed as it is managed by udev.  On my install /boot was a separate partition, so make sure you include that as well.  Then edit a few files in the new rootfs on the server side: /nfsroot/etc/network/interfaces to assign a static IP, and /nfsroot/etc/fstab to replace your rootfs device with /dev/nfs.

You’ll also want to copy over the uImage and uInitrd files from /nfsroot/boot over to your TFTP folder as they’ll be needed to boot the plug.

Now comes the fun part: creating a new initrd.  Unfortunately the one installed by Debian did not support an NFS root, so we need to build a new one.  Do this on the server side, as the new initrd will be copied to the TFTP folder as well.  The steps are relatively straightforward, but I recommend you read through the details first to get a better understanding.  The steps I outline are taken from a post on editing initrds and another on Debian SheevaPlug installation.

The process consists of the following: copy the Kirkwood initrd image, extract it, update initramfs.conf, re-compress it, then rebuild it, adding a special header for U-Boot.

~$ mkdir tmp

~$ cd tmp

~/tmp$ cp /home/exports/sheeva_squeeze_nfs_root/boot/initrd.img-2.6.32-5-kirkwood ./initrd.img-2.6.32-5-kirkwood.gz

~/tmp$ gunzip initrd.img-2.6.32-5-kirkwood.gz 

~/tmp$ mkdir initrd

~/tmp$ cd initrd/

~/tmp/initrd$ cpio -id < ../initrd.img-2.6.32-5-kirkwood
25329 blocks

~/tmp/initrd$ ls
bin  conf  etc  init  lib  sbin  scripts

At this point you’ll have the initrd extracted.  Edit the conf/initramfs.conf file, search for BOOT=local, and change it to BOOT=nfs.

~/tmp/initrd$ vim conf/initramfs.conf

~/tmp/initrd$ find . | cpio --create --format='newc' > ../initrd.img-2.6.32-5-kirkwood-nfs
25329 blocks

~/tmp/initrd$ cd ..

~/tmp$ ls
initrd  initrd.img-2.6.32-5-kirkwood  initrd.img-2.6.32-5-kirkwood-nfs

~/tmp$ gzip -c initrd.img-2.6.32-5-kirkwood-nfs > initrd.img-2.6.32-5-kirkwood-nfs.gz

~/tmp$ ls
initrd  initrd.img-2.6.32-5-kirkwood  initrd.img-2.6.32-5-kirkwood-nfs  initrd.img-2.6.32-5-kirkwood-nfs.gz

~/tmp$ mkimage -A arm -O linux -T ramdisk -C gzip -a 0x00000000 -e 0x00000000 -n 'Debian ramdisk' -d initrd.img-2.6.32-5-kirkwood-nfs.gz initrd.img-2.6.32-5-kirkwood-nfs
Image Name:   Debian ramdisk
Created:      Thu Oct 20 20:43:31 2011
Image Type:   ARM Linux RAMDisk Image (gzip compressed)
Data Size:    5473115 Bytes = 5344.84 kB = 5.22 MB
Load Address: 0x00000000
Entry Point:  0x00000000

~/tmp$ ls
initrd  initrd.img-2.6.32-5-kirkwood  initrd.img-2.6.32-5-kirkwood-nfs  initrd.img-2.6.32-5-kirkwood-nfs.gz

~/tmp$ cp initrd.img-2.6.32-5-kirkwood-nfs /var/lib/tftpboot/uImage-nfsroot

Now that you have the new initrd created, you’ll need to update your U-Boot environment variables.  This can be quite complex in and of itself.  The previous U-Boot environment I had did not work with this new setup; my attempts at abstracting away most of the complexity into many variables did not seem to work either, as there were many instances where variables referenced in commands and other variables were not getting set properly.  I basically re-worked it from the ground up using sample material at the website of DENX (the developers of U-Boot).  Their manual has a lot of information; you’ll probably want to reference the CommandLineParsing, UBootCmdGroupEnvironment, and LinuxNfsRoot pages primarily.  The working copy of my U-Boot environment is listed below:

addip=setenv bootargs ${bootargs} ip=${ipaddr}:${serverip}:${gatewayip}:${netmask}:${hostname}:${netdev}:off panic=1
addmisc=setenv bootargs ${bootargs}
addtty=setenv bootargs ${bootargs} console=ttyS0,${baudrate}
boot_mmc=setenv bootargs $(bootargs_console); run bootcmd_mmc; bootm ${kernel_addr_r} ${fdt_addr_r}
boot_nfs=tftpboot ${kernel_addr_r} ${bootfile}; tftpboot ${fdt_addr_r} ${fdt_file}; run nfsargs addip addtty; bootm ${kernel_addr_r} ${fdt_addr_r}
bootargs=root=/dev/nfs rw nfsroot= ip= panic=1 console=ttyS0,115200
bootargs_console=console=ttyS0,115200 mtdparts=orion_nand:0x00100000@0x00000000(uBoot)ro,0x00400000@0x00100000(uImage),0x1fb00000@0x00500000(rootfs)
bootargs_nfs=$(bootargs_console) root=/dev/nfs rw nfsroot=$(serverip):$(rootpath) ip=$(ipaddr):$(serverip):$(gateway):$(netmask):sheeva:eth0:none
bootcmd=run boot_nfs
bootcmd_mmc=mmc init; ext2load mmc 0:1 0x00800000 /uImage; ext2load mmc 0:1 0x01100000 /uInitrd
bootcmd_nfs=tftpboot ${kernel_addr_r} ${bootfile}; tftpboot ${fdt_addr_r} ${fdt_file}
console=console=ttyS0,115200 mtdparts=nand_mtd:0x00100000@0x00000000(uBoot)ro,0x00400000@0x00100000(uImage),0x1fb00000@0x00500000(rootfs)
nfsargs=setenv bootargs root=/dev/nfs rw nfsroot=${serverip}:${rootpath}
x_bootargs=console=ttyS0,115200 mtdparts=orion_nand:512k(uboot),4m@1m(kernel),507m@5m(rootfs) rw
x_bootargs_root=ubi.mtd=2 root=ubi0:rootfs rootfstype=ubifs
x_bootcmd_kernel=nand read 0x6400000 0x100000 0x400000
x_bootcmd_sata=ide reset;
x_bootcmd_usb=usb start;

By default it’s set to boot from NFS via the bootcmd=run boot_nfs line.  If you’re having issues getting NFS root to work, you may want to remove panic=1 from the addip variable; this way it can drop you to an initrd prompt.  The boot_mmc variable is adapted from Martin’s example.  You may need to modify these slightly if your SheevaPlug is different, specifically the bootargs_console variable: mtdparts reflects the flash partition setup on my plug (I think these are the factory default settings), so update it if yours differs; /proc/mtd will have this information if you boot back into the Debian install.  Also, I changed the device name to orion_nand; I believe it used to be orion_mtd initially.

That’s basically it.   Hope this is of use to someone!

More sensor interfacing with the Bus Pirate

After interfacing the TMP102 temperature sensor with the Bus Pirate recently, the next step on my agenda was to integrate it with the Web Platform along with an HH10D humidity sensor.  Unfortunately I realised a bit late (entirely my fault for rushing an order) that although the HH10D is an I2C sensor, the humidity is actually read through a frequency output pin; the I2C EEPROM is only for calibration data.  Bus Pirate to the rescue once again!

This, like the TMP102, is a 3.3V sensor, and it needs pullup resistors on the I2C bus to read the EEPROM.  I spent a number of hours getting nothing but garbage data with the first sensor I tried (the calibration values were completely wrong); on the second try with a backup sensor I had much better luck, getting good results almost immediately.

We’ll be doing a setup similar to the TMP102’s: using I2C and turning on the power supplies, but in this case the AUX pin should be set to HIGH-Z for input/reading.

HiZ> m
1. HiZ
2. 1-WIRE
4. I2C
5. SPI
6. 2WIRE
7. 3WIRE
9. LCD
x. exit(without change)

(1)> 4
Set speed:
 1. ~5KHz
 2. ~50KHz
 3. ~100KHz
 4. ~400KHz

(1)> 3
I2C> c
a/A/@ controls AUX pin
I2C> @
I2C> W
Power supplies ON
I2C> (1)
Searching I2C address space. Found devices at:
0xA2(0x51 W) 0xA3(0x51 R)

Good.  Now, if you take a look at the datasheet for the M24C02 EEPROM, to do a random address read you’ll need to send a start bit, the device select (with R/W set to 0, i.e. write), the address, then another start bit, the device select again (this time with R/W set to 1), then the read commands.  The HH10D datasheet mentions there are two calibration factors we need to read, sensitivity and offset; each is a 2-byte integer, making a total of 4 bytes, starting at EEPROM address 0x0A.  Using the BP, we do the following:

I2C> [0xa2 0x0a [0xa3 r:4
READ: 0x01  ACK 0x8B  ACK 0x1E  ACK 0x32

This is in big-endian format, so the sensitivity value (at address 0x0A) reads 0x018B and the offset is 0x1E32.  We’ll also do a frequency read, the first as a baseline with my AC running nearby and the second while breathing on the sensor:

I2C> f
AUX Frequency:  autorange 7,169 Hz
I2C> f
AUX Frequency:  autorange 6,790 Hz

The formula in the datasheet states RH = (offset - freq) * sensitivity / 4096.  Plugging these in we get 54% and 90.6% RH, which seems to match the other humidity sensor in my room.  Success!
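As a quick sanity check, the same arithmetic on the command line (using the decimal equivalents of the hex values read above):

```shell
# sens = 0x018B = 395, offset = 0x1E32 = 7730; RH = (offset - freq) * sens / 4096
awk 'BEGIN { printf "%.1f%%\n", (7730 - 7169) * 395 / 4096
             printf "%.1f%%\n", (7730 - 6790) * 395 / 4096 }'
```

This prints 54.1% and 90.6%, matching the readings above.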

The next step is getting this working on my Web Platform.  I have some very bare-bones code working with the TMP102 already; I just need to work on the HH10D a bit more and then I’ll post my example up here.

Interfacing sensors with the Bus Pirate

Spring has come and with it, storm season.  In light of the recent historic multi-day tornado outbreak I decided it was time to replace my initial Arduino weather station with something a bit more capable.  The Arduino is a decent platform for a quick prototype, but I’ve been interested in PICs for some time and would really like more experience with them.  I recently picked up a Web Platform by Dangerous Prototypes, was impressed with its features, and thought it would make a great base for my new weather station.  A quick order from SparkFun provided me with a few I²C sensors to experiment with.

Sidenote: the only downside I’ve thought of so far is the inability to toss lots of sensors all on one bus (the 1-Wire protocol provides each sensor with a unique address, somewhat similar to a MAC).  However, a basic weather station would only need a very few sensors at most (indoor/outdoor?), so this could easily be accomplished; most sensors I’ve seen are configurable to allow at least 2-4 devices per bus, and if needed more than one I²C bus could always be implemented on the PIC.

I figured some tinkering with my Bus Pirate to interface the sensors first would be a good start.  This is a quick reference/demo for a few of SparkFun’s breakout sensors.  Some of this is adapted from the example code provided on their website.

TMP102 – Digital Temperature Sensor Breakout (SEN-09418)

This is a tiny I²C sensor which provides great resolution (12 bits), good accuracy (0.5°C typical), and nice features including programmable high/low alert temperatures and low power consumption.

The ADDR/A0 pin configures the slave address, allowing up to 4 devices on the same bus.  It can be connected to either GND, V+, SDA or SCL; in my example here I’ve tied it to GND (using the AUX output of the Bus Pirate and setting it low).  I must stress that it’s important to tie it to something; left floating, the sensor did not seem to operate correctly.  If using a different address, reference the datasheet to see the necessary changes (Table 12).  Also keep in mind this sensor needs 3.6V MAX, not 5V!  Pullup resistors do not need to be enabled as they are already provided on the breakout board.

Set mode to I2C, set AUX low and turn on the power supplies:

HiZ> m
1. HiZ
2. 1-WIRE
4. I2C
5. SPI
6. 2WIRE
7. 3WIRE
9. LCD
x. exit(without change)

(1)> 4
Set speed:
 1. ~5KHz
 2. ~50KHz
 3. ~100KHz
 4. ~400KHz

(1)> 3
I2C> W
Power supplies ON
I2C> c
a/A/@ controls AUX pin
I2C> a
I2C> (1)
Searching I2C address space. Found devices at:
0x00(0x00 W) 0x90(0x48 W) 0x91(0x48 R)

To read the temperature you need to generate a start bit, send the device address, then specify the register to read (0x00, the read-only temperature register):

I2C> [0x90 0x00
I2C> [0x91 r r
READ: 0x19
READ:  ACK 0x60

Our temperature data is 0x1960.  The first byte is the MSB; the second (the LSB) only adds fractional resolution and does not need to be read, so for a quick test it can be ignored: our temperature is 25°C (0x19).  After pressing my finger on the sensor for a few seconds, a second reading gives an MSB of 0x1F (31°C); it indeed seems to work just fine.
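If you do want the full 12-bit resolution, combine both bytes, shift right by 4, and scale by 0.0625°C per count (per the TMP102 datasheet).  A quick sketch of the math for the 0x1960 reading above:

```shell
raw=$(( 0x1960 >> 4 ))        # drop the 4 unused low bits: 0x196 = 406 counts
awk -v r="$raw" 'BEGIN { printf "%.4f C\n", r * 0.0625 }'   # 25.3750 C
```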


LVM Loopback HOW-TO

This is a simple tutorial on setting up LVM on loopback devices.  I’ve used this a few times for creating dynamic virtual disks; it came in particularly handy when archiving NEXRAD radar data for my radarwatchd project, where using up all your inodes on several hundred thousand 15KB files doesn’t sound like my idea of fun.  Creating a virtual volume with reiserfs was a particularly handy solution in this case.

First Steps

  • Create several empty files with dd.  These will hold your physical volumes:
# dd if=/dev/zero of=lvmtest0.img bs=100 count=1M
1048576+0 records in
1048576+0 records out
104857600 bytes (105 MB) copied, 8.69891 s, 12.1 MB/s
# dd if=/dev/zero of=lvmtest1.img bs=100 count=1M
1048576+0 records in
1048576+0 records out
104857600 bytes (105 MB) copied, 8.69891 s, 12.1 MB/s
  • Link them to loopback devices with losetup: (see below if you run out of loopback devices)
# losetup /dev/loop0 lvmtest0.img
# losetup /dev/loop1 lvmtest1.img
  • Partition them with fdisk.  Create a single primary partition, full size of the device, and set the type to Linux LVM (0x8e).  Shorthand commands with fdisk: n,p,1,Enter,Enter,t,8e,w.  I’ve also automated this somewhat with sfdisk (one full-size partition of type 8e):
# sfdisk /dev/loop0 << EOF
,,8e
EOF
  • Install and configure LVM if needed.  In this test I used the filter settings in lvm.conf to ensure only loopback devices will be used with LVM:
filter = [ "a|/dev/loop.*|", "r/.*/" ]
  • Initialize LVM:
# vgscan
Reading all physical volumes.  This may take a while...
# vgchange -ay
  • Create physical volumes with pvcreate:
# pvcreate /dev/loop0 /dev/loop1
 Physical volume "/dev/loop0" successfully created
 Physical volume "/dev/loop1" successfully created
  • Create a volume group then extend to include the second and any subsequent physical volumes:
# vgcreate testvg /dev/loop0
 Volume group "testvg" successfully created
# vgextend testvg /dev/loop1
 Volume group "testvg" successfully extended
  • Verify the operation was successful:
# vgdisplay -v
 Finding all volume groups
 Finding volume group "testvg"
 --- Volume group ---
 VG Name               testvg
 System ID
 Format                lvm2
 Metadata Areas        2
 Metadata Sequence No  2
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                0
 Open LV               0
 Max PV                0
 Cur PV                2
 Act PV                2
 VG Size               192.00 MB
 PE Size               4.00 MB
 Total PE              48
 Alloc PE / Size       0 / 0
 Free  PE / Size       48 / 192.00 MB
 VG UUID               1Gmt3W-ivMH-mXQH-HswP-tjHR-9mAZ-917z0g

 --- Physical volumes ---
 PV Name               /dev/loop0
 PV UUID               X11MYK-u8hk-4R26-CHuy-QUSw-2hLq-Notlnc
 PV Status             allocatable
 Total PE / Free PE    24 / 24

 PV Name               /dev/loop1
 PV UUID               dLKXlz-c536-9Elj-C2zZ-B4aw-69kj-zZ7PuN
 PV Status             allocatable
 Total PE / Free PE    24 / 24
  • Create a logical volume.  Use the largest available size reported to use by vgdisplay (Free PE/Size value):
# lvcreate -L192M -ntest testvg
 Logical volume "test" created
  • Finally create a filesystem on the new logical volume.  We’re using reiserfs for this example here:
# mkfs.reiserfs /dev/mapper/testvg-test
mkfs.reiserfs 3.6.21 (2009

A pair of credits:
Alexander  Lyamin  keeps our hardware  running,  and was very  generous  to our
project in many little ways.

The  Defense  Advanced  Research  Projects Agency (DARPA) is the
primary sponsor of Reiser4.  DARPA  does  not  endorse  this project; it merely
sponsors it.

Guessing about desired format.. Kernel 2.6.31-17-generic-pae is running.
Format 3.6 with standard journal
Count of blocks on the device: 49152
Number of blocks consumed by mkreiserfs formatting process: 8213
Blocksize: 4096
Hash function used to sort names: "r5"
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
inode generation number: 0
UUID: d50559af-5078-4de5-812a-264590e60177
 ALL DATA WILL BE LOST ON '/dev/mapper/testvg-test'!
Continue (y/n):y
Initializing journal - 0%....20%....40%....60%....80%....100%
ReiserFS is successfully created on /dev/mapper/testvg-test.
  • Mount the volume and you should be done:
# mount /dev/mapper/testvg-test /mnt/virtlvm
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/testvg-test
                        196596     32840    163756  17% /mnt/virtlvm

Growing the LVM volume

To expand the LVM volume you will follow similar steps to the ones stated above:

  • Create another virtual disk and partition with dd and fdisk:
# dd if=/dev/zero of=lvmtest2.img bs=100 count=1M
# fdisk /dev/loop2
  • Tie new disk image to a free loopback device with losetup:
# losetup /dev/loop2 lvmtest2.img
  • Create physical volumes on the new device with pvcreate:
# pvcreate /dev/loop2
Physical volume "/dev/loop2" successfully created
  • Extend the volume group:
# vgextend testvg /dev/loop2
 Volume group "testvg" successfully extended
  • Extend the logical volume:
# lvextend /dev/mapper/testvg-test /dev/loop2
 Extending logical volume test to 288.00 MB
 Logical volume test successfully resized
  • Resize the filesystem with resize_reiserfs:
# resize_reiserfs /dev/mapper/testvg-test
resize_reiserfs 3.6.21 (2009

resize_reiserfs: On-line resizing finished successfully.

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
                        294896     32840    262056  12% /mnt/virtlvm

Modifying the amount of loopback devices

  • To see all the loopback devices:
# ls -l /dev/loop*
  • Adding new loopback devices
  • If loopback support is compiled as a module:
# modprobe loop max_loop=64
  • To make this permanent, edit /etc/modules or /etc/modprobe.conf and add one of the following:
loop max_loop=64
options loop max_loop=64
  • If the loopback module is compiled into the kernel you will need to reboot, editing the kernel command-line parameters to append ‘max_loop=64’ at the end.
  • Loopback devices can also be dynamically created with mknod.  Loopback devices have a major number of 7 and minor number of the loopback device.  Keep in mind devices created this way will not be persistent and will disappear after a reboot:
# mknod -m660 /dev/loop8 b 7 8
# mknod -m660 /dev/loop9 b 7 9
# mknod -m660 /dev/loop10 b 7 10

That should cover mostly everything.  I’ve been informed that some of the steps may not be strictly necessary, such as the vgscan/vgchange.  I also believe the partitioning may not be needed since we’re using the full size of the virtual devices, but it’s good practice nonetheless, and being able to see the LVM partitions definitely makes things clearer.  Hope this helps!

Upgrading EOL Ubuntu installations

I have a number of Ubuntu boxes lying around and have gotten a bit lazy keeping some of the lesser-used ones up to date.  I realized this after an apt-get update resulted in 404 errors, oops.  Since I couldn’t directly do a dist-upgrade, I checked the Ubuntu wiki for upgrading EOL installations; the process is pretty simple.

All you basically need to do is update your /etc/apt/sources.list and replace archive.ubuntu.com and security.ubuntu.com (or whatever mirrors you are using) with old-releases.ubuntu.com, setting the release name for your current distro correctly of course.  If it’s a desktop system you may need to install or upgrade the update-manager package and/or ubuntu-desktop as well.  Then a simple aptitude update && aptitude safe-upgrade followed by do-release-upgrade should take care of your needs.  If you are multiple releases behind you will need to upgrade from one release to the next individually; you can’t skip directly to the latest, so it may take some time.  Otherwise it’s pretty straightforward and, from my experience thus far, very pain-free, which is always a plus.
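The sources.list edit amounts to rewriting every mirror hostname to old-releases.ubuntu.com, which sed handles nicely.  A sketch, shown on a stand-in file rather than the real one (the example deb line is made up; run the same sed against /etc/apt/sources.list when you’re happy with it):

```shell
# stand-in for a real sources.list entry
echo 'deb http://us.archive.ubuntu.com/ubuntu hardy main restricted' > sources.list.eol
# point any *.ubuntu.com mirror at old-releases
sed -i 's|http://[a-z0-9.]*ubuntu\.com|http://old-releases.ubuntu.com|g' sources.list.eol
cat sources.list.eol
```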

Quick removal of comments from C/C++ code

I’m working on another post, an optimization HOW-TO with valgrind (using my radarwatchd project as an example), and unfortunately realized the code I have in my Google Code repository is fairly out of sync with the latest, as I’ve been working on some significant patches.  The changes incorporated numerous performance enhancements, but there was a lot of old code I had simply commented out temporarily while developing the new source.  Needing a quick way to diff the two without seeing any comments, I stumbled across a solution with Perl.

Pretty simple: I have the current code in a subdirectory ./radarwatchd and the old version from the Google repo in ./googlecode/radarwatchd.  I made a copy of the sources so as not to modify the original files, then ran a Perl one-liner to perform the regex substitution.  A simple diff wrapped things up.

find {,googlecode/}radarwatchd/cached/ \( -name '*.cpp' -o -name '*.h' \) -exec cp -v '{}' '{}.nocomments' \;
find {,googlecode/}radarwatchd/cached/ -name '*.nocomments' -exec perl -0777 -pi -e 's{//.*?$|/\*.*?\*/}{ }smg' '{}' \;
diff -NBwar {,googlecode/}radarwatchd/cached/*.cpp.nocomments

The Perl regex is not terribly complex and there are probably instances where it doesn’t work quite right, but for a simple comparison it works well.  The only downside is that each comment is replaced by a single space, so you’ll need to run diff with -w to ignore whitespace.  With this particular invocation you can’t replace the comment with nothing, as find’s -exec will treat that first {} as the place to put the filename (rather than the last) and cause all sorts of problems.


Remote monitoring with apticron and logcheck

I wanted to write a brief post on some basic ways to help remotely administer Ubuntu/Debian boxes.  Over the past few months I’ve been tinkering with various methods of handling this, and what I’ve come up with seems to work fairly well.  It basically consists of two applications: apticron, which monitors repositories for package updates, and logcheck, which monitors logs for security or other noteworthy entries.

Apticron is very easy to set up: it’s in the repositories and requires basically no configuration.  It drops a script in /etc/cron.daily and that is about it, emailing any reports to root.  Of course the destination can be modified through a .forward file or an entry in /etc/aliases.

Logcheck is fairly simple to set up as well; it is also in the repositories.  Once it is installed, edit /etc/logcheck/logcheck.conf to configure it.  The first thing you will want to set is REPORTLEVEL; the options are “workstation”, “server” (the default), or “paranoid”.  I use server on mine, which gives a good amount of detail.  I would advise against paranoid unless the server is extremely locked down and users do not typically log in; workstation is good for a desktop environment.  The only other variable I edited was SENDMAILTO.  Logcheck works by comparing each log entry against a set of regular expressions and reporting any that do not match.  I had to modify one or two regexes slightly to fix false positives; if you want my changes just ask and I’ll send them over.
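For reference, a local override is just a file of extended regexes dropped into the matching ignore.d directory; one pattern per line, and anything a pattern matches is suppressed.  The rule below is only an illustration of the format, not one of my actual changes:

```
# /etc/logcheck/ignore.d.server/local-sshd (one egrep pattern per line)
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ sshd\[[0-9]+\]: Connection closed by [0-9.]+$
```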

One other small gem I want to mention: gkrellm.  I use it on both my desktop and server, and it is invaluable for providing real-time system performance metrics.  Sure, it does not have any logging capabilities and is thus unsuitable for a large-scale environment, but for keeping an eye on one or two boxes it fits the bill quite nicely.

Rsync script update

I’ve been using my rsync mirror script for a few weeks now and have implemented an additional tweak or two after deploying it on my desktop system as well.

  • Firstly, you can now use the same script across multiple boxes – the UUIDs are configurable per hostname.
  • A bug was fixed where the script would fail if the destination disks were not already mounted.
  • You can also customize the rsync invocation on a per-host basis.  This was needed on my desktop machine, where a /home account mounted via NFS on a different file system was causing IO errors and subsequently skipping the file deletion.
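The per-host lookup can be as simple as a case statement on the hostname; everything below (function name, UUIDs, options) is made up for illustration, not taken from the actual script:

```shell
# hypothetical per-host table: prints "disk-UUID [extra rsync options]"
backup_settings() {
    case "$1" in
        desktop) echo "aaaa-bbbb-cccc --one-file-system" ;;  # skip the NFS-mounted /home
        *)       echo "dddd-eeee-ffff" ;;                    # default for other hosts
    esac
}
backup_settings "$(uname -n)"
```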

One final enhancement I want to add is the ability to spin the backup disk down after rsync completes; this will not only help increase drive life but also reduce power use (however small the saving may be).  For some odd reason on my machines, whenever I stop an internal disk (umount, sync, then spin-down) it works for a few seconds, then the drive spins back up again and I see ATA link reset messages as if it had just been plugged in.  External drives connected via eSATA seem to work just fine, however.  I need to look into that more.

Also available shortly:

  • A similar but different script I use for syncing my RAID storage array with a backup external drive connected via eSATA.  It’s a bit of a hack in spots, but the nice thing is that it is almost fully automated.  Use it on a machine with one of those eSATA docks and you have a good way of making a quick backup of an array or disk.
  • Sample service account script implementation with the ‘chattr’ command.

Download or wiki.