Archives for: 2011


Permalink 11:22:18 am, by mholmes, 173 words, 307 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 30

XQuery for retrieving the latest modified date of any resource in a collection

We intend this as the first stage in implementing a script which can refresh a large db collection based on document dates, so only documents changed since the last-modified date are uploaded (perhaps after allowing a 24-hour cushion):

xquery version "1.0";

(: standard eXist namespace URIs :)
declare namespace local="http://www.w3.org/2005/xquery-local-functions";
declare namespace exist="http://exist.sourceforge.net/NS/exist";
declare namespace xmldb="http://exist-db.org/xquery/xmldb";
import module namespace request="http://exist-db.org/xquery/request";

declare variable $inCol := request:get-parameter("col", "/db");
declare variable $startCol := if (starts-with($inCol, "/")) then $inCol else concat("/", $inCol);

declare function local:getLatest($col as xs:string) as xs:dateTime? {
	let $dates := local:getDocDates($col)
	return max($dates)
};

declare function local:getDocDates($col as xs:string) as xs:dateTime* {
	let $result :=
		(for $c in xmldb:get-child-collections($col) return local:getDocDates(concat($col, '/', $c)),
		 for $r in xmldb:get-child-resources($col) return xmldb:last-modified($col, $r))
	return $result
};

local:getLatest($startCol)
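The planned client-side refresh step could be sketched like this with GNU find and date (everything here — the marker-file trick, paths, and file names — is my own illustration, not part of the script above):

```shell
# sketch of the planned refresh: upload only files modified after
# (last-modified minus a 24-hour cushion). LASTMOD would come from the
# XQuery above; the demo collection and file names are invented.
LASTMOD="2011-06-01 12:00:00"
CUTOFF=$(date -d "$LASTMOD 24 hours ago" "+%Y%m%d%H%M")

DOCS=$(mktemp -d)
touch -t 201101010000 "$DOCS/old.xml"   # unchanged since January: skip it
touch "$DOCS/new.xml"                   # just modified: upload it

MARKER=$(mktemp)
touch -t "$CUTOFF" "$MARKER"
CHANGED=$(find "$DOCS" -type f -newer "$MARKER")
echo "$CHANGED"
```

In the real script, the `find` output would be piped to whatever does the upload.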

UPDATE: the next step is ready. See this post:


Permalink 05:14:18 pm, by mholmes, 11 words, 188 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 5

Getting rid of .svn dirs in a tree

Because I keep forgetting it:

find . -type d -name ".svn" -prune -exec rm -rf {} \;


Permalink 03:10:39 pm, by Greg, 463 words, 511 views   English (CA)
Categories: Activity log, Documentation; Mins. worked: 20

Install Oracle JDK on Ubuntu

The Oracle version of Java will no longer be available through the repos and must be installed manually. Here's how I did it.
1) Download the latest JDK from here:
2) Unpack the JDK (right now it unpacks as a directory called jdk1.7.0_02) into the /usr/lib/jvm/ directory.
3) Create a text file called /usr/lib/jvm/.jdk1.7.0_02.jinfo and put this in it:


hl java /usr/lib/jvm/jdk1.7.0_02/jre/bin/java
hl keytool /usr/lib/jvm/jdk1.7.0_02/jre/bin/keytool
hl pack200 /usr/lib/jvm/jdk1.7.0_02/jre/bin/pack200
hl rmid /usr/lib/jvm/jdk1.7.0_02/jre/bin/rmid
hl rmiregistry /usr/lib/jvm/jdk1.7.0_02/jre/bin/rmiregistry
hl unpack200 /usr/lib/jvm/jdk1.7.0_02/jre/bin/unpack200
hl orbd /usr/lib/jvm/jdk1.7.0_02/jre/bin/orbd
hl servertool /usr/lib/jvm/jdk1.7.0_02/jre/bin/servertool
hl tnameserv /usr/lib/jvm/jdk1.7.0_02/jre/bin/tnameserv
hl jexec /usr/lib/jvm/jdk1.7.0_02/jre/lib/jexec
jre policytool /usr/lib/jvm/jdk1.7.0_02/jre/bin/policytool
jdk appletviewer /usr/lib/jvm/jdk1.7.0_02/bin/appletviewer
jdk apt /usr/lib/jvm/jdk1.7.0_02/bin/apt
jdk extcheck /usr/lib/jvm/jdk1.7.0_02/bin/extcheck
jdk idlj /usr/lib/jvm/jdk1.7.0_02/bin/idlj
jdk jar /usr/lib/jvm/jdk1.7.0_02/bin/jar
jdk jarsigner /usr/lib/jvm/jdk1.7.0_02/bin/jarsigner
jdk javac /usr/lib/jvm/jdk1.7.0_02/bin/javac
jdk javadoc /usr/lib/jvm/jdk1.7.0_02/bin/javadoc
jdk javah /usr/lib/jvm/jdk1.7.0_02/bin/javah
jdk javap /usr/lib/jvm/jdk1.7.0_02/bin/javap
jdk jconsole /usr/lib/jvm/jdk1.7.0_02/bin/jconsole
jdk jdb /usr/lib/jvm/jdk1.7.0_02/bin/jdb
jdk jhat /usr/lib/jvm/jdk1.7.0_02/bin/jhat
jdk jinfo /usr/lib/jvm/jdk1.7.0_02/bin/jinfo
jdk jmap /usr/lib/jvm/jdk1.7.0_02/bin/jmap
jdk jps /usr/lib/jvm/jdk1.7.0_02/bin/jps
jdk jrunscript /usr/lib/jvm/jdk1.7.0_02/bin/jrunscript
jdk jsadebugd /usr/lib/jvm/jdk1.7.0_02/bin/jsadebugd
jdk jstack /usr/lib/jvm/jdk1.7.0_02/bin/jstack
jdk jstat /usr/lib/jvm/jdk1.7.0_02/bin/jstat
jdk jstatd /usr/lib/jvm/jdk1.7.0_02/bin/jstatd
jdk native2ascii /usr/lib/jvm/jdk1.7.0_02/bin/native2ascii
jdk rmic /usr/lib/jvm/jdk1.7.0_02/bin/rmic
jdk schemagen /usr/lib/jvm/jdk1.7.0_02/bin/schemagen
jdk serialver /usr/lib/jvm/jdk1.7.0_02/bin/serialver
jdk wsgen /usr/lib/jvm/jdk1.7.0_02/bin/wsgen
jdk wsimport /usr/lib/jvm/jdk1.7.0_02/bin/wsimport
jdk xjc /usr/lib/jvm/jdk1.7.0_02/bin/xjc
plugin /usr/lib/jvm/jdk1.7.0_02/jre/lib/amd64/

4) Run "sudo update-alternatives --config java" and choose jdk1.7.0_02.
5) Check that it's your current Java by running "java -version"; you should get something like this back:
java version "1.7.0_02"
Java(TM) SE Runtime Environment (build 1.7.0_02-b13)
Java HotSpot(TM) 64-Bit Server VM (build 22.0-b10, mixed mode)
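Note that "update-alternatives --config java" only offers choices that have already been registered; if the freshly unpacked JDK doesn't appear in the list, something like this registers the main tools first (my own sketch — the priority 2000 is an arbitrary choice, and the commands are echoed as a dry run):

```shell
# register each JDK tool with update-alternatives; echoed so you can
# eyeball the commands before applying them (drop the echo to run them)
JDK=/usr/lib/jvm/jdk1.7.0_02
CMDS=""
for tool in java javac javadoc jar keytool; do
  CMD="update-alternatives --install /usr/bin/$tool $tool $JDK/bin/$tool 2000"
  echo "sudo $CMD"
  CMDS="$CMDS$CMD
"
done
```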


Permalink 01:31:50 pm, by mholmes, 131 words, 220 views   English (CA)
Categories: Servers, Activity log, Documentation; Mins. worked: 120

Setting up a Jenkins server to build eXist dist-war

As of today, builds are working on Plum. We learned:

  • For some reason https: connections to the SF repo fail, but http: works. No idea why yet; teiJenkins has working https:, so we should be able to figure it out.
  • Log parse rules for the Console Log Parser have to be specified with a full absolute path.
  • Log parse rules take some working out, because the logs often mention the word "error" without an actual error having occurred, and they contain many javac warnings we don't care about.
  • When the build artifact(s) you care about have unpredictable file names (as with ours, which have the SVN rev number built into their filenames), it can be tricky to pull them or push them somewhere else to make them available. We're still working on that.
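One workaround for the unpredictable-filename problem is simply to grab the newest file matching a pattern; a sketch (the directory and naming pattern here are invented for illustration):

```shell
# pick the most recently modified WAR when the SVN rev number is baked
# into the name (paths and pattern are hypothetical)
DIST=$(mktemp -d)
touch -t 201111010000 "$DIST/exist-r15001.war"
touch -t 201111020000 "$DIST/exist-r15484.war"
LATEST=$(ls -t "$DIST"/exist-r*.war | head -n 1)
echo "$LATEST"
```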


Permalink 04:05:38 pm, by Greg, 392 words, 142 views   English (CA)
Categories: R & D; Mins. worked: 0

Setting up a Mac for use in HCMC labs

We need to create a default profile for users so we don't have to add software licenses and so forth every time we have someone new start work.

Default profile
Add all apps and launch each one of them as an admin user; if you don't, you can have problems launching them as a standard user.
Create a user called "default" - do not set a password.
Log in as "default"
Launch all apps and configure them appropriately. Clear the cache, history & cookies in each browser.
Clear the "default" user's caches. Go to /Users/default/Library/Caches & delete all the files in this folder.
When the setup looks good, open the "Keychain Access" utility. Select "Login" from the "Keychains" area, then open the "File" menu and choose "Delete Keychain login".
Log out and back in as an admin.

Note that the following *only* applies to the English profile. I assume you have similar work to do for each profile you work on.
To set our new profile as the default, open a terminal and
Become root: sudo su
Head over to the profiles directory: cd /System/Library/User\ Template/
Back up the default profile: tar czvf stock_profile.tar.gz /System/Library/User\ Template/English.lproj/
Delete the default profile directory contents: rm -rf /System/Library/User\ Template/English.lproj/*
Copy your fresh profile over: cp -R /Users/default/* /System/Library/User\ Template/English.lproj/
NOTE: make sure that this copies over any hidden directories and files
This process *will* hose the permissions on the default profile, so restart the machine and run "Repair Disk Permissions" from the "Disk Utility" app.

oXygen XML editor
Set oXygen's vmoptions to be a bit more useful - you may need to lower -Xmx2048m if you don't have enough RAM. Run this sed one-liner to make the changes in place:
sudo sed -i 's/-Xss650K -Xms128m -Xmx512m -XX:SoftRefLRUPolicyMSPerMB=10 -XX:MaxPermSize=256m/-Xss64m -Xms128m -Xmx2048m -XX:SoftRefLRUPolicyMSPerMB=10 -XX:MaxPermSize=512m -XX:+UseParallelGC/g' /Applications/oxygen/

Corporate bling/optimizing
Default login screen wallpaper is here: /System/Library/CoreServices/DefaultDesktop.jpg - replace it with a branded UVic jpg

Turn off Dashboard, it's a waste of RAM: sudo defaults write /Library/Preferences/ mcx-disabled -boolean YES


Permalink 09:06:09 am, by mholmes, 135 words, 389 views   English (CA)
Categories: Servers, Activity log, Documentation; Mins. worked: 30

teiJenkins machine: enabling unattended upgrades

I was following these instructions to enable unattended upgrades to Lucid, and I hit a known bug with the unattended-upgrades package: the default runlevels for Start and Stop are wrong. I found this page, which suggested the following fixes, which worked.

Symptom: after installing the unattended-upgrades packages, you get this:

update-rc.d: warning: unattended-upgrades start runlevel arguments (none) do not match LSB Default-Start values (0 6)
update-rc.d: warning: unattended-upgrades stop runlevel arguments (0 6) do not match LSB Default-Stop values (none)


sed -i 's/Default-Start:[\t ]*0 6/Default-Start:/' /etc/init.d/unattended-upgrades

sed -i 's/Default-Stop:/Default-Stop:\t\t0 6/' /etc/init.d/unattended-upgrades 

dpkg-reconfigure -u unattended-upgrades

Following that, I edited /etc/apt/apt.conf.d/50unattended-upgrades to enable both security and regular updates, and to allow reboots when required. Jenkins seems to cope OK with reboots.
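For reference, the relevant parts of my edited 50unattended-upgrades look roughly like this (a sketch, not a copy-paste config: the exact origin strings vary by release, and older versions of the package may lack the Automatic-Reboot option):

```
// allow both security and regular updates
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
        "Ubuntu lucid-updates";
};
// reboot automatically when an update requires it
Unattended-Upgrade::Automatic-Reboot "true";
```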


Permalink 01:55:05 pm, by mholmes, 539 words, 231 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 180

Displaying SVG logos on HTML5 pages

Today I made a start on figuring out the best way to put linked SVG logos on web page. The test document was a new HCMC logo comprising the UVic shield and the HCMC name. This is what I had to do to the SVG image:

  • Create a group for each part of the logo, to make it linkable.
  • (In Inkscape) right-click on the group, and choose Create Link.
  • Right-click again, and choose Link Properties.
  • Fill in the @xlink:href, @xlink:title, and @target attributes. If you don't include @target="_top", then the link will replace the SVG image with the linked page. That's usually not what you want.
  • Create white boxes (in this case, because the image BG is white) over the top of each group; link them in a similar way, and then send them to the bottom. If you don't do this, the link only works when the mouse is over the actual content of the element (the text, in this case); between letters, there's no link.
  • In the root SVG element, set @viewBox="0, 0, [width], [height]", where [width] and [height] are the original @width and @height attribute values from the same element.
  • Add @preserveAspectRatio="xMinYMin meet" to the root <svg> element.
  • Set the @width and @height attributes of the root <svg> element to the pixel values you want the image to show up with. These are naked integers (no 'px' or other units). (This is unfortunately necessary.)
  • Clean any cruft you can out of the SVG file. I was able to remove some old @style attributes which had font info in them, even though the text had already been converted to paths.

That's it for the SVG. Now, to insert it into the page. In this case, I wanted it centred, which proved annoyingly tricky.

  • Create a <div> element with width and height set to the desired pixel-width of the image (as already specified in the SVG file), in the @style attribute, and also include margin-left: auto and margin-right: auto.
  • Create an <object> tag inside it, with the same width and height settings, with the @type attribute set to "image/svg+xml" and the @data attribute pointing at the SVG file.
  • Include some appropriate fallback content in the <object> tag, in case the image can't be displayed.

The results (for the Map of London project footer) look like this:

  <div style="text-align: center; margin-left: auto; margin-right: auto; width: 237px; height: 40px;">
    <object type="image/svg+xml" data="images/hcmc_logo_linked.svg" style="width: 237px; height: 40px;">
      <a href="">
        <img src="images/hcmc_logo.gif" alt="Humanities Computing and Media Centre" width="163" height="39" /></a>
      <a href="">
        <img src="images/uvic_logo.gif" alt="University of Victoria" width="104" height="40" /></a>
    </object>
  </div>

This is tested and working on Firefox, Safari, Opera, IE9 and Chrome. Chrome has a bug which changes the linked bit into a black square once it's "visited", but that's a known webkit bug which has just been fixed, so Chrome should inherit the fix soon.


Permalink 10:39:21 am, by mholmes, 93 words, 97 views   English (CA)
Categories: R & D, Activity log; Mins. worked: 15

Tree conflicts in SVN

For some reason I keep getting tree conflicts in the Moses SVN. This is the only way I've found to resolve them:

svn resolve --accept working -R .

You need to run this from one directory above the level of the actual conflict. I'm still trying to figure out why this keeps happening, but it may have something to do with the fact that the project was dormant for a good while; during that time the SVN remained unchanged, but my working copy was affected by a crash and restore of my home directories.


Permalink 01:20:58 pm, by Greg, 1459 words, 6310 views   English (CA)
Categories: Servers, Activity log, Documentation; Mins. worked: 240

Setting up a local apt-mirror (updated for precise)

In order to avoid bottlenecks I rebuilt an old machine and set it up as an apt mirror of ubuntu, virtualbox, google chrome and google earth repos. Here's how:
Install apt-mirror 'sudo apt-get install apt-mirror'
Configure apt-mirror at /etc/apt/mirror.list

My mirror.list looks like this:
############# config ##################
set base_path /mnt/repo/apt-mirror
set mirror_path $base_path/mirror
set skel_path $base_path/skel
set var_path $base_path/var
set cleanscript $var_path/
set defaultarch amd64
set postmirror_script $var_path/
# set run_postmirror 0
set nthreads 20
set _tilde 0
############# end config ##############

deb precise main restricted universe multiverse
deb precise-security main restricted universe multiverse
deb precise-updates main restricted universe multiverse
deb precise-proposed main restricted universe multiverse
deb precise-backports main restricted universe multiverse
# if you want to use the mirror as an install base (using, say, the mini.iso image) you need to mirror the debian-installer directories.
deb precise main main/debian-installer restricted restricted/debian-installer universe universe/debian-installer multiverse multiverse/debian-installer
deb precise partner
deb precise main

deb precise contrib

deb stable main
deb stable main

#### Multiarch (oneiric and newer) mirrors need to contain both architectures ####
deb-i386 precise main restricted universe multiverse
deb-i386 precise-security main restricted universe multiverse
deb-i386 precise-updates main restricted universe multiverse
deb-i386 precise-proposed main restricted universe multiverse
deb-i386 precise-backports main restricted universe multiverse

deb-i386 precise partner
deb-i386 precise main

deb-i386 precise contrib

deb-i386 stable main
deb-i386 stable main


Notice that I'm mirroring i386 trees even though I only build 64-bit machines - as of oneiric (11.10), Ubuntu has gone to a multiarch system, so this is no longer optional.

Note that in order to mirror repos that use gpg keys (like virtualbox and google), you need to add the keys to the apt-mirror machine. Do it like this:
wget -q -O - | sudo apt-key add -
wget -q -O - | sudo apt-key add -

I want it to autoupdate every day, so I edited /etc/cron.d/apt-mirror (uncommented the cronjob line in the file and pointed the log where I wanted to store them).
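The uncommented line in /etc/cron.d/apt-mirror ends up looking something like this (the schedule and log path here are illustrative, not prescriptive):

```
# run the mirror daily at 4am as the apt-mirror user
0 4 * * *	apt-mirror	/usr/bin/apt-mirror > /mnt/repo/apt-mirror/var/cron.log 2>&1
```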

Each time the mirror job runs it tries to run the postmirror script, whether or not you uncomment the path to it in mirror.list. Oddly, the script doesn't ship with the apt-mirror package, so I had to create it myself. I put it where mirror.list expects it - in this case at /mnt/repo/apt-mirror/var/ - and put a call to the clean script in it. This ensures that the cleanup script gets run (it doesn't seem to run automatically).

apt-mirror requires 'mirror', 'var' and 'skel' directories, so if you're setting this up on a separate drive you'll need to create them on the storage drive: 'sudo mkdir /mnt/repo/apt-mirror /mnt/repo/apt-mirror/var /mnt/repo/apt-mirror/skel'. Note that the script wants to run as the apt-mirror user, so change the ownership on those directories: 'sudo chown -R apt-mirror:apt-mirror /mnt/repo/apt-mirror'.

I need it readily accessible to clients, so I installed apache 'sudo apt-get install apache2' and added symlinks to the appropriate directories e.g. 'ln -s /mnt/storage/mirror/ /var/www/ubuntu' - do that for each repo you're mirroring. I made them like this:
sudo ln -s /mnt/repo/apt-mirror/mirror/ /var/www/canonical
sudo ln -s /mnt/repo/apt-mirror/mirror/ /var/www/extras
sudo ln -s /mnt/repo/apt-mirror/mirror/ /var/www/google
sudo ln -s /mnt/repo/apt-mirror/mirror/ /var/www/ubuntu
sudo ln -s /mnt/repo/apt-mirror/mirror/ /var/www/virtualbox

Now that everything is configured, go ahead and fill up the new repo. Remember to run the job as the apt-mirror user 'sudo su - apt-mirror -c apt-mirror' - my precise repo takes up ~95GB of disk.

Configure clients to point at the new machine - remember that the http request needs to point to your symlinked directories in /var/www. Here is the sources.list I'm using on our lab machines (note the correlation between symlinks and sources lines):

deb precise main restricted universe multiverse
deb precise-updates main restricted universe multiverse
deb precise-backports main restricted universe multiverse
deb precise-security main restricted universe multiverse
deb precise partner
deb precise main
deb stable main
deb stable main
deb precise contrib

I've also built a few packages of my own and am making them available from a private repo, which I set up using a combination of these instructions:

UPDATE: I just added raring (Ubuntu 13.04) to the mix by copy/pasting the appropriate bits above - although I did not duplicate repos that are not distro-dependent (e.g. google). I also removed openshot, as it appears to be up-to-date in the 13.04 repo.
I did nothing else to add the new mirror set, and the mirror seems to be building right now. So, adding a new set seems to be as simple as that!

UPDATE: I've noticed that several sub-directories in the ubuntu repo do not get mirrored. It turns out that they are treated differently, so the behaviour is expected.
I'm getting around this by rsync-ing the directories in the postmirror script (apt-mirror/var/). I've added rsync commands for all the directories I'm interested in for precise (12.04) - quite a few, actually - and the command I'm using looks like this:

rsync --recursive --times --links --hard-links --delete --delete-after rsync:// /mnt/repo/apt-mirror/mirror/

I'm specifically mirroring the main/debian-installer, main/dist-upgrader-all, main/i18n, main/installer-amd64, main/installer-i386, main/uefi directories (where they exist) in all of the various 'precise' sub-directories (like precise-updates and so forth).

This unfortunately does not mirror udebs (necessary for tasks like netboot installs), so I had to write a script to get them. I've added it to the postmirror script. The standalone version looks like this:

# modified version of this:

# Mirrors with some serious speed
# Canadian: (20 Gbps)
# German: (20 Gbps)
# Irish: (10 Gbps)

# Mirrors nearby
# UBC: (1 Gbps)


    for UDEB in $(wget --quiet -O - $UDEBMIRROR/debian-installer/binary-amd64/Packages.bz2 | bzip2 -cd | sed -ne 's/^Filename: //p') ; do
        if [ ! -e $LOCALPOOL/$UDEB ]; then
            curl $MIRROR/"$UDEB" --create-dirs --silent -o $LOCALPOOL/$UDEB
        fi
    done

# adjust perms
chown -R apt-mirror:apt-mirror $LOCALPOOL
chmod -R 775 $LOCALPOOL


Permalink 03:19:24 pm, by mholmes, 38 words, 110 views   English (CA)
Categories: R & D, Activity log; Mins. worked: 45

Updated and tested eXist build script

We have more info about build targets that have to be run prior to the main build to get the correct version information into the app, so I've updated and tested my build script for an eXist WAR.


Permalink 09:52:29 am, by Greg, 181 words, 466 views   English (CA)
Categories: Announcements; Mins. worked: 0

Command Line Reminders

Searching with find
Example: find all jpg images from this location and copy them to another location - and don't change permissions or timestamps.
find . -iname '*.jpg' -exec cp -p {} ~/Desktop/test/ \;
find . -iname '*.php' -exec chmod 700 {} \;
which will find all php files from here and change permissions on them to conform to the whole suPHP thing (only user-readable php files etc.)

Another one:
find . -type l -name '*.png' -exec mv {} ~/Desktop/crap/ \;
will find all of the symlink-ed png files and move them somewhere

One that matches all text files on the entire system (that is, searching recursively from /) over 10KB, owned by paul, that are not readable by other users, and then use chmod to enable reading, like this:

find / -name "*.txt" -size +10k -user paul -not -perm +o=r -exec chmod o+r {} \;

Find all recently modified files:
find . -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort

NOTE: When typing -exec parameters, be sure to include a space before \;. Otherwise, you might see an error such as "missing argument to `-exec'".
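One more wrinkle worth remembering: terminating -exec with '+' instead of '\;' batches many files into each command invocation, which is much faster when thousands of files match. A self-contained demo in a scratch directory (so it doesn't touch real files):

```shell
# demo: two files stripped of other-read permission, then restored with a
# single chmod invocation via the '+' terminator
DIR=$(mktemp -d)
touch "$DIR/a.txt" "$DIR/b.txt"
chmod o-r "$DIR"/*.txt
find "$DIR" -name '*.txt' -exec chmod o+r {} +
```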


Permalink 10:31:27 am, by Greg, 93 words, 163 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 30

SVN and Gnome Keyring

We had a problem on beet where a local user had Gnome Keyring issues when trying to commit changes to svn. It *appears* that if the local user name and the remote user name are the same, but have different passwords, things can get a bit weird. We deleted the local user's keyring, but the problem re-occurred on reboot.
When the user logged in to beet with their netlink id the problem did not occur.

Note to self - do not provision local users that have the same name as the project itself.


Permalink 08:34:05 am, by Greg, 139 words, 204 views   English (CA)
Categories: Labs, Activity log; Mins. worked: 1500

New machine deployed

Made an attempt at using my maverick script to build the new machine (beet) but there were so many changes (package dependencies, config deprecations etc.) that I gave up and started on an oneiric builder - A LOT HAS CHANGED under the hood!

Spent a few days working on that and deployed beet with the new HCMC oneiric build. It seems OK, but still requires a few tweaks after a person logs in. I have a script on the desktop that will set things up and delete itself when done. One apparent bug that I'm tracking down: one cannot shut the machine down from the login screen. I think it's a lightdm problem at this point.

I'm continuing work on the oneiric build in preparation for pangolin as I intend to upgrade the lab to pangolin once it lands.

Permalink 08:27:48 am, by Greg, 64 words, 186 views   English (CA)
Categories: Servers, Activity log; Mins. worked: 15

ISE logs

Sysadmin notified us that ISE logs were filling up viola's filesystem and needed to be removed. I had them (all 4.9GB) moved to /home1t/hcmc/iselogs. We need to get the log4j system on the new viola set up to rotate logs effectively (all the while ensuring that urchin is gobbling up the current logs and making stats available to MB).


Permalink 02:25:47 pm, by mholmes, 13 words, 374 views   English (CA)
Categories: Servers, R & D, Activity log, Tasks; Mins. worked: 10

Add curl to Jenkins build script

SR has added something that requires curl, so add it to the build script.


Permalink 01:32:57 pm, by mholmes, 91 words, 150 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 40

Using eXist to query the TEI Guidelines XML

I'm finding it useful to attack the TEI Guidelines with XQuery once in a while, to dig out and format stuff for review and discussion. However, you can't just load the P5 source into eXist, because you'll have problems with entities and have to mess with DTD catalogues. Instead, you can just expand the guidelines-en.xml file to pull in all the content referenced through entities, like this:

.../P5/Source$ xmllint --noent guidelines-en.xml --output guidelines-expanded-en.xml

That gives you a 6MB file you can upload into eXist and query easily.


Permalink 09:46:07 am, by mholmes, 270 words, 668 views   English (CA)
Categories: Servers, R & D, Activity log, Documentation; Mins. worked: 40

Migrating from Tomcat 6 to Tomcat 7: problem with Cocoon solved

We have many Cocoon + eXist projects currently running on Tomcat 6 on Pear. At some point, they'll migrate to Tomcat 7, so I've been using Tomcat 7 on my desktop to see what problems emerge. Here's one of them:

When I started Tomcat 7 with the Mariage project running in it, the site worked, except for the home page, which showed an error to the effect that "index.html" was not there. The site doesn't use index.html at all; its sitemap matcher for the home/root looks like this:

<map:match pattern="">

and all the site links point to the directory root. I eventually determined that this was caused by a change in the way Tomcat handles default "welcome files". If you look in Tomcat's conf/web.xml, you'll see this:

<welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>

Obviously, there's no handler for an empty directory root; Tomcat 7, when presented with a request for the directory root, goes to this list to see what to pass to the webapp. This list is identical in Tomcat 6, so it's clearly a change in the logic of the handler rather than in the default configuration.

First I tried adding an empty <welcome-file> element to the top of the list, but that didn't work, so I commented out all three of the <welcome-file> elements, and then normal service was resumed. Incidentally, you have to restart Tomcat between changes to web.xml for them to take effect (according to the web).


Permalink 11:57:04 am, by mholmes, 284 words, 181 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 120

Building a new standalone eXist war file (dist-war)

If this is a new setup and you've never checked out eXist before, then:

Otherwise just go into exist_trunk and do

  • svn update
  • cd eXist
  • ./ clean
  • ./ download-additional-jars
  • cd extensions
  • cp
  • [Check to make sure FOP is turned on. It's turned on by default at the moment. **FYI: just checked out rev 15484 (Oct 29, 2011) and FOP is turned off in]
  • cd ../
  • ./

NOTE: if the build fails because of an error related to Ant and Ivy, then download this file: and put it in eXist/tools/ant/lib/

  • ./ -f build/scripts/jarsigner.xml
  • ./ dist-war

Next, if you're going to store your XQuery and the rest of your application logic in the database, don't forget to edit WEB-INF/controller-config.xml to make sure requests in the webapp root are handled by your controller file. For example, I added this:

<!-- MDH: Application site configuration. -->
  <root pattern="/*" path="xmldb:exist:///db/site"/>

to tell eXist that my controller.xql lives in /db/site, and should be handling every request to the webapp from the root down.

One other fix proved to be necessary: my application serves images out of an /images/ subcollection, but there's a line in the default controller-config.xml which blocks the handler in my controller.xql. I had to comment it out:

<!-- MDH: Commented this out. -->
  <!-- <forward pattern="/images" servlet="ScaleImageJAI"/> -->


Permalink 01:22:32 pm, by Greg, 120 words, 420 views   English (CA)
Categories: Labs, R & D, Documentation, Announcements; Mins. worked: 0

Adding eXist as an oXygen Data Source

To easily edit docs inside a running eXist DB.
It's a two-step process: (1) add a data source; (2) add a connection.
1.1) Go to Options -> Preferences -> Data Sources
1.2a) Add a new data source in the top panel (titled Data Sources)
1.2b) Type: eXist (add the following driver files from the WEB-INF directory of your eXist instance: exist.jar, ws-commons-util-1.0.2.jar, xmldb.jar, xmlrpc-client-3.1.3.jar, xmlrpc-common-3.1.3.jar)

2.1) Choose the just-created Data Source
2.2a) Add something like this to the XML DB URI field: xmldb:exist://localhost:8080/exist/xmlrpc
2.2b) Add login info to User and Password fields
2.2c) Provide an internal path to your collection (like "/db")

Use Window -> Show View -> Data Source Explorer to browse the db.


Permalink 10:12:34 am, by mholmes, 232 words, 114 views   English (CA)
Categories: Servers, Activity log, Documentation; Mins. worked: 75

Meeting with KL and RE to discuss future plans for servers

These are the resulting plans:

  • SVN and TRAC will be moved from Lettuce to a new VM. RE will take care of that by the end of November. That machine will be managed by Sysadmin, although we will have admin over SVN (creation of repos, management of users).
  • Greg will check out whether the VMWare license allows the use of a VM as a member of the cluster. If this is allowed, RE will build a VM and add it to the cluster. Sysadmin will manage this VM too, along with the other cluster nodes.
  • Map of London will be ported by the end of November (excepting the experimental map, which will be rebuilt by Greg and Martin later).
  • Lettuce will be brought down during the first half of December, rebuilt, and added to the cluster.
  • The new DB server will be purchased soon, and built up with the latest RedHat, PostgreSQL and MySQL. We shall call it Orange.
  • Project data will be ported from Cress to the new DB server early in the new year.
  • Chard will be rebuilt as the backup for Pear, and we will write scripts to keep it in sync.
  • Cress will be rebuilt as the "warm mirror" for the new db server.
  • Arugula will need to be upgraded from Redhat 4 to Redhat 5 or 6 64-bit at some point; Sysadmin will warn us in advance of outages etc.


Permalink 09:53:09 am, by mholmes, 43 words, 221 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 10

Starting Tomcat with UTF-8

To make sure Tomcat starts with UTF-8 instead of the default encoding, I put a script in [tomcat]/bin containing:

./ -Dfile.encoding="UTF-8"

Running this instead of the standard startup script helps to avoid character encoding issues.
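An alternative way to get the same effect is to set the flag via JAVA_OPTS, which is the variable catalina.sh reads for JVM options; a sketch of such a wrapper (the delegation is guarded so the script is harmless outside a real Tomcat bin directory):

```shell
#!/bin/sh
# append the encoding flag to whatever JAVA_OPTS already holds
JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"
export JAVA_OPTS
# delegate to the stock startup script when run from a Tomcat bin dir
if [ -x ./startup.sh ]; then
    exec ./startup.sh "$@"
fi
```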


Permalink 11:29:27 am, by mholmes, 63 words, 130 views   English (CA)
Categories: Servers, Activity log, Documentation; Mins. worked: 15

How to know when a headless server needs a reboot after updates

Learned something I didn't know today. When you run updates on an Ubuntu headless server, it doesn't tell you when a reboot is required; instead, it creates this file:




Check for the existence of that file. If it's there, do a reboot. It'll be removed during the reboot. I learned this when doing updates to teijenkins.
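The check is easy to script; on Ubuntu the marker is conventionally /var/run/reboot-required (verify on your release), so something like this works in a cron job or login script:

```shell
# succeeds when a pending-reboot marker exists; the path can be
# overridden as an argument, which also makes the function testable
needs_reboot() {
    [ -f "${1:-/var/run/reboot-required}" ]
}

if needs_reboot; then
    echo "Reboot required"
fi
```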


Permalink 01:52:31 pm, by sarneil, 68 words, 737 views   English (CA)
Categories: Activity log; Mins. worked: 5

show hidden files in Finder GUI in Mac OS Lion

To show hidden files in Finder windows (e.g. .files and the Library folder)

launch the terminal and enter: defaults write AppleShowAllFiles TRUE
hold option, click on Finder icon in dock
from popup menu, choose relaunch

to hide hidden files
launch the terminal and enter: defaults write AppleShowAllFiles FALSE
hold option, click on Finder icon in dock
from popup menu, choose relaunch


Permalink 04:31:05 pm, by sarneil, 115 words, 151 views   English (CA)
Categories: Activity log; Mins. worked: 240

get set up on new iMac

Over the past week or so, I've
- bought a new iMac, extra RAM and Applecare
- migrated my personal account from the old iMac to the new one (the last "3 minutes remaining" took almost an hour)
- configured various apps (e.g. create an IMAP connection rather than a POP connection in Thunderbird) and hardware (magic trackpad thingee, and possible bug with the mapping of the command and option keys on the split keyboard on startup not being what I set them to in the Systems Preferences/keyboards/modifier keys)
- had to get AW to come in with a mondo-big screwdriver to open the hatch for the additional RAM cards when they arrived.


Permalink 02:22:32 pm, by mholmes, 48 words, 119 views   English (CA)
Categories: Activity log; Mins. worked: 20

Deleted unused MySQL databases

Backing up my regular dbs, I noticed there are lots of unused ones, which may impact performance a bit. In consultation with GN, deleted a handful of obsolete, unused, and test databases from MySQL on TAPoR, including a Joomla, a Drupal, a Moodle and an empty B2Evo.


Permalink 04:33:54 pm, by mholmes, 64 words, 106 views   English (CA)
Categories: Activity log, Documentation; Mins. worked: 60

SnowballAnalyzer now available in eXist trunk

Heard from AR that he's now integrated support for the Lucene SnowballAnalyzer into eXist trunk, so I built a WAR from SVN to test this with. I've added lucene-snowball-2.9.2.jar (to match the Lucene version in trunk) to WEB-INF/lib, and I'm going to start investigating both this and the way controller.xql works, ahead of porting Map of London to the latest eXist.


Permalink 09:00:42 am, by mholmes, 96 words, 159 views   English (CA)
Categories: Servers, Activity log; Mins. worked: 15

Jenkins bug resolved

For a little while now, the Stylesheets task has been showing as unstable, because of this warning:

Warning: XML resolver not found; external catalogs will be ignored

SPQR found the solution: my /var/lib/jenkins/hudson-log-parse-rules file was out of date. The most recent version in SVN has a line suppressing that warning. This seems to be the fix:

cd /var/lib/jenkins
sudo svn export
sudo chown jenkins hudson-log-parse-rules

Running a build right now to see if it's fixed.


Permalink 02:15:30 pm, by mholmes, 73 words, 99 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 40

LibreOffice problem on Spartan

EG-B reported that the LibreOffice install on Spartan was broken; it couldn't find the installed Apple JRE, and wouldn't do even simple operations such as copy and paste, complaining that it needed the JRE to do them. Showing it where the JRE was didn't help either. The version installed was a 3.3 beta from last October, so I ended up deleting it and replacing it with the latest release, which seems to work OK.


Permalink 04:05:21 pm, by Greg, 101 words, 122 views   English (CA)
Categories: Servers, R & D; Mins. worked: 0

New hardware

We now have a replacement for rutabaga. I'll be copying the data to secondary storage first so I can use the enterprise drives out of rutabaga in the new box. We also purchased some monitors and a new linux box like the last 2. I'll be up to my ears in hardware tasks for at least the rest of the week. Still to do:
- Build RAID 6 array in new NAS and transfer all data over.
- Build new linux box with my script.
- Move furniture and computers around to accommodate the new machine.
- Clean up the huge mess I've generated in the meantime.
Permalink 04:00:25 pm, by Greg, 92 words, 121 views   English (CA)
Categories: Servers, Activity log; Mins. worked: 45

More rutabaga

I'm now having trouble reading anything from the array. I'm hoping it's rutabaga being flaky and not the drives.
I have a complete copy of the data from a couple of weeks ago, so it shouldn't be a big deal if I can't continue with my backup.
Because all the drives I intend to use in the new NAS are occupied, I'm going to need to move stuff around a bit circuitously. I'll start on that tomorrow, seeing as how the copy I've been running for the last 4 hours has now crapped out!


Permalink 11:31:27 am, by Greg, 81 words, 1354 views   English (CA)
Categories: Servers, Activity log; Mins. worked: 90

failed drive in rutabaga 3

It appears as though every time rutabaga is powered down the RAID array loses its marbles.
This time the new drive got dropped from the array. In fact, the drive didn't even show up in 3 reboots. I just powered up the machine and now it *does* show up. So I went through the same process as previously described, and I'm rebuilding the array right now. So far all appears OK.
UPDATE: array is now rebuilt (15:20) and all appears to be fine.

Permalink 10:42:41 am, by Greg, 63 words, 1361 views   English (CA)
Categories: Activity log; Mins. worked: 5

eSATAp cables

Although all of our new-ish Linux cubes have eSATAp connectors on them, I've had a hard time sourcing a cable that will allow me to connect an un-enclosed drive to the eSATAp port. I eventually found this (for $20):
I checked with Monoprice live help; they don't have them, but they say they'll look at getting them.


Permalink 10:37:49 am, by Greg, 37 words, 1379 views   English (CA)
Categories: Servers, Activity log; Mins. worked: 10

Tomcat script on pear

Because Tomcat is installed in a slightly different location on pear, the old init script didn't work. RE has created a new script called "tomcat" in the init.d directory. Don't use the old jakarta-tomcat script anymore.


Permalink 05:03:08 pm, by mholmes, 64 words, 128 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 60

Jenkins3 enhanced, tested again, docs updated

SY from the TEI Council sent some comments and suggestions on the script, so I added two new features and fixed a bug, ran another full test, and updated the documentation on the TEI wiki. The changes were: checking that you're running Lucid; checking that nothing else is already running on port 8080; and removing some pointless chmods I was doing, that had no effect.


Permalink 04:24:29 pm, by mholmes, 106 words, 124 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 150

Jenkins3 tested again, script complete, instructions written...

Ran the whole process from script to completed builds again on Jenkins3 this morning, and all worked well, so I figure we're ready for the next stage. Wrote up the instructions and provided the script on the TEI Wiki, then set about preparing to build our proper VM for deployment. Initially we thought we'd do this using Remastersys to create an ISO for the full server install, but that failed -- the ISO wouldn't boot -- so we're going to take the same approach on our proper VM as is detailed in the wiki page. GN is arranging with sysadmin for the VM to be provisioned.

Permalink 09:13:22 am, by mholmes, 40 words, 120 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 40

Jenkins3: Changing default port

It looks like the place to change the default port is /etc/default/jenkins. Tried to change it to port 80, but it wasn't having any of it, so I gave up. We can map 8080 from the VM through the AJP connector.
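For reference, the port lives in a shell-style variable in that file; the fragment below is a sketch from memory of the Debian/Ubuntu Jenkins package, so treat the variable name as an assumption. The refusal of port 80 is likely because Jenkins runs as the unprivileged jenkins user, and binding ports below 1024 requires root:

```shell
# /etc/default/jenkins (sketch, not a verbatim copy of the shipped file);
# the init script sources this and passes the value to Jenkins as --httpPort.
HTTP_PORT=8080    # a non-root daemon cannot bind ports below 1024, e.g. 80
```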


Permalink 07:14:08 am, by mholmes, 190 words, 290 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 60

Jenkins4: port forwarding on older VirtualBox

My home install of VirtualBox is older than my work one, and the instructions for port forwarding elsewhere in this blog don't work on it. This is how to port-forward Jenkins on an older VirtualBox.

  • Shut down both the VM and VirtualBox.
  • Open the XML file for the VM.
  • Add these lines to the <ExtraData> section:
          <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/jenkins/GuestPort" value="8080"/>
          <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/jenkins/HostPort" value="9494"/>
          <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/jenkins/Protocol" value="TCP"/>
  • Restart VB and the VM. If you get this error:
    Configuration error: Failed to get the "MAC" value (VERR_CFGM_VALUE_NOT_FOUND).
    then you have the wrong value for the virtual network adapter (i.e. "e1000" above should be something else). Search the logs to find lines like this:
    00:00:01.104 [/Devices/e1000/0/Config/] (level 4)
    00:00:01.104   AdapterType    <integer> = 0x0000000000000000 (0)
    00:00:01.104   CableConnected <integer> = 0x0000000000000001 (1)
    00:00:01.104   LineSpeed      <integer> = 0x0000000000000000 (0)
    00:00:01.104   MAC            <bytes>   = "08 00 27 27 01 f8" (cb=6)
    which will tell you the correct name of the device.

Jenkins4 (my home VM) is now running builds... fingers crossed...


Permalink 02:35:49 pm, by mholmes, 90 words, 1786 views   English (CA)
Categories: Announcements; Mins. worked: 30

Jenkins3: Deja Vu fonts were missing!

After TEIP5-Documentation and TEIP5 failed to build again, I went in and looked at the logs more closely. I think the failure is caused by the absence of the Deja Vu fonts, which would of course be installed by default on a desktop but are not there on the headless server. I've installed them, and added them to the script, and I'm running TEIP5 again. There may be other common fonts missing, so we might have to go through this process a few times, but this is definitely progress.
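One way to see which of these font families a headless box can actually find is fontconfig's fc-list. A minimal sketch (assuming fontconfig is installed; it degrades to a message if not):

```shell
# Count installed font entries whose name mentions DejaVu; zero matches
# the symptom behind the failed builds.
if command -v fc-list >/dev/null 2>&1; then
    fc-list | grep -ci dejavu || true
else
    echo "fontconfig (fc-list) not installed"
fi
```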


Permalink 05:21:07 pm, by mholmes, 941 words, 226 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 120

Jenkins3 and Jenkins5: progress

Made some progress today after discovering that installing some XML tools before the Sun JDK caused the OpenJDK to get installed, so I switched the order of some items. Also figured out a better way of enabling the partner repo that will work with distros other than Lucid, and made a couple of other tweaks. The full script is below (since our backup server is down at the moment). I've also built Jenkins4 as a Natty 11.04 server, and I'm testing the script on that to see if it might work there too. It seems to be running fine; I got as far as a working Jenkins once before retreating to create a useful restore point and fix some bugs. It's quite plausible that it might work perfectly well on any Ubuntu between Lucid and Natty.

#The Mighty Jenkins Builder Script.
#Note that this should be run as root (with sudo).
echo "Entering the Mighty Jenkins Builder Script."

uid=$(/usr/bin/id -u) && [ "$uid" = "0" ] ||
{ echo "This script must be run as root."; exit 1; }

echo "Running as root: good."

#First do updates.
echo "Doing system updates before starting on anything else."
apt-get update
apt-get upgrade

#Now add the repositories we want.
echo "Backing up repository list."
cp /etc/apt/sources.list /etc/apt/sources.list.bak

#Uncomment partner repos.
echo "Uncommenting partner repositories on sources list, so we can get Sun Java."
#Note: this is very crude, and also enables the CD-ROM source, which results in 
#errors. We need to make this more precise.
#sed -i -e "s/# deb/deb/g" /etc/apt/sources.list
#This is a better replacement:
sed -i -re '/partner/ s/^#//' /etc/apt/sources.list

#First Jenkins
echo "Adding Jenkins repository."
wget -q -O - | apt-key add -
echo "deb binary/" > /etc/apt/sources.list.d/jenkins.list

#Next TEI.
echo "Adding TEI Debian repository."
gpg --keyserver --recv-keys FEA4973F86A9A497
apt-key add ~/.gnupg/pubring.gpg
echo "deb ./" > /etc/apt/sources.list.d/tei.list

#Now we can start installing packages.
echo "Updating for new repositories."
apt-get update

echo "Installing Sun Java JDK."
apt-get install sun-java6-jdk &&
echo "Installing core packages we need."
apt-get install openssh-server libxml2 libxml2-utils devscripts xsltproc debhelper subversion trang &&

#TEI packages
echo "Installing TEI packages."
apt-get install psgml xmlstarlet debiandoc-sgml linuxdoc-sgml jing jing-trang-doc libjing-java rnv texlive-xetex &&
apt-get install trang-java tei-p5-doc tei-p5-database tei-p5-source tei-schema tei-emacs saxon nxml-mode-tei tei-p5-xsl tei-p5-xsl2 tei-roma onvdl tei-oxygen zip &&

#I don't believe the following step is necessary, so it's been commented out for the moment.
#Waiting for info from SR and SY about why it was in the instructions.
#echo "Removing things that cause problems for TEI."
#apt-get remove `apt-cache search gcj | grep gcj | awk '{print $1}'`

#Setting up configuration for oXygen
mkdir /root/.com.oxygenxml
chown jenkins /root/.com.oxygenxml
chmod a+x /root/.com.oxygenxml
mkdir /root/.java
chown jenkins /root/.java
chmod a+x /root/.java
echo "Don't forget to put your licensekey.txt file in the folder /usr/share/oxygen so that oXygen is registered."

#More packages needed
echo "Installing packages needed for building TEI source."

#Various fonts and the like.
echo "Installing fonts we need."
apt-get install msttcorefonts ttf-arphic-ukai ttf-arphic-uming ttf-baekmuk ttf-junicode ttf-kochi-gothic ttf-kochi-mincho
echo "The Han Nom font is not available in repositories, so we have to download it from SourceForge."
cd /usr/share/fonts/truetype
mkdir hannom
cd hannom
wget -O
find . -iname "*.ttf" | rename 's/\ /_/g'
fc-cache -f -v

apt-get install jenkins

#Configuration for Jenkins
echo "Starting configuration of Jenkins."
echo "Getting the Hudson log parsing rules from TEI SVN."
cd /var/lib/jenkins
svn export
chown jenkins hudson-log-parse-rules
echo "Getting all the job data from TEI SVN."
#Don't bring down the config.xml file for now; that contains security settings specific to 
#Sebastian's setup, and will prevent anyone from logging in. We leave the server unsecured,
#and make it up to the user to secure it.
#svn export
#chown jenkins config.xml
svn export --force jobs
chown -R jenkins jobs
echo "Installing Jenkins plugins."
cd plugins
wget --no-check-certificate
chown jenkins copyartifact.hpi
wget --no-check-certificate
chown jenkins emotional-hudson.hpi
wget --no-check-certificate
chown jenkins greenballs.hpi
wget --no-check-certificate
chown jenkins jobConfigHistory.hpi
wget --no-check-certificate
chown jenkins plot.hpi
wget --no-check-certificate
chown jenkins log-parser.hpi
wget --no-check-certificate
chown jenkins scp.hpi
wget --no-check-certificate
chown jenkins WebSVN2.hpi

echo "Restarting Jenkins server, so that it finds and initializes all the new plugins."
/etc/init.d/jenkins restart

echo "OK, we should be done. Now you have to:"
echo "1. Put your oXygen licence key in a file called licensekey.txt in the oXygen directory (/usr/share/oxygen/)."
echo "2. Go to the Jenkins interface on http://localhost:8080, and set up authentication. Read the Jenkins docs."
echo "That's it!"


Permalink 02:04:51 pm, by mholmes, 230 words, 147 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 30

Jenkins2 (now Jenkins4): things learned today

Went through the process a couple more times at home, and I have some questions still to be answered, and some additions to the script:

When it got to this line:

apt-get remove `apt-cache search gcj | grep gcj | awk '{print $1}'`

apt reported that it was uninstalling jing. I'd previously been puzzled about the absence of jing at the end of the script, because it's explicitly installed before this point, but it looks like this is the reason. I'm just wondering what that line is actually doing. I got it from the wiki:

and it looks like SY added that line in July last year:

I can add another line to my script to reinstall jing, so it's no problem, but I've written to ask him why he did that. Perhaps it's not necessary to remove all that stuff, because installing jing just brings a lot of it back anyway.
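For the record, the backquoted expression just harvests the first column (the package name) of every `apt-cache search gcj` result and hands the list to apt-get remove, which is why anything depending on one of those packages, jing included, goes with them. Simulated below with canned output, since the real package list varies by release (the two names are illustrative):

```shell
# Stand-in for `apt-cache search gcj` output ("name - description" lines).
apt_cache_output='gcj-jdk - gcj and classpath development tools
libgcj-common - Java runtime library (common files)'

# The pipeline from the script: keep lines mentioning gcj, print column 1.
echo "$apt_cache_output" | grep gcj | awk '{print $1}'
# prints:
#   gcj-jdk
#   libgcj-common
```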

If we don't delete that stuff, the addition would be:

echo "Reinstalling jing, which just got removed by the previous line."
apt-get install jing

We can also restart Jenkins at the end of the script, so the user doesn't have to:

echo "Restarting Jenkins server, so that it finds and initializes all the new plugins."
/etc/init.d/jenkins restart


Permalink 02:25:58 pm, by mholmes, 9 words, 133 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 120

Jenkins2: (Jenkins3 actually)

The full install script now seems to be working.

Permalink 10:47:28 am, by mholmes, 90 words, 141 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 20

Found a source of nVivo problems

Sitting with K to debug an out-of-memory error with nVivo, I noticed that instead of closing nVivo to force it to write its file to disk, she was closing the VM. That would have the opposite effect: the file would not be written to the disk until the VM was restarted and nVivo, at some point, closed. That's a bit of a fragile situation, and we might want to look at a better approach to projects which depend on an app which doesn't actually save when it says it's saving.

Permalink 10:44:43 am, by mholmes, 83 words, 143 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 90

Latest builds: nearly working

Found that some stuff had not been installed due to a couple of typos, and that other things which used to be installed automatically as dependencies of the TEI Debian package now have to be explicitly installed. Updated the script. Finally got a successful job to run in Jenkins.

One remaining problem is kindlegen, which is proprietary and requires you to agree to terms before you can download it, so it can't be installed from the command line. Still figuring out what to do about that.


Permalink 05:01:14 pm, by mholmes, 74 words, 122 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 120

Wrote a build script

Since we're now close to having a working Jenkins, I've written what we know into a build script, which I've been testing and tweaking. I've created a Jenkins3 server which is a plain Ubuntu server, fully updated, and I've set up a snapshot from which I can run the script. After several runs, it's pretty close to being complete and functional, but I haven't had a chance to test the Jenkins install itself yet.

Permalink 11:14:40 am, by mholmes, 170 words, 127 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 60

Jenkins2: Trying a fix for the Han Nom font

No word from SR on what to do about this rather odd font, so I'm trying a simple install of the downloadable package to see if that'll do it:

cd /usr/share/fonts/truetype
mkdir hannom
cd hannom
wget -O
find . -iname "*.ttf" | rename 's/\ /_/g'
fc-cache -f -v

Ran another build of P5-Documentation, but now it fails like this:

Wrote teip5.epub in /var/lib/jenkins/jobs/TEIP5-Documentation/workspace
mv teip5.epub Guidelines.epub
java -jar Utilities/epubcheck-1.1.jar Guidelines.epub
Epubcheck Version 1.1

ERROR: Guidelines.epub/OEBPS/ref-re.html(42): element "div" from namespace "" not allowed in this context

Check finished with warnings or errors!

make[1]: *** [epub.stamp] Error 1
make[1]: Leaving directory `/var/lib/jenkins/jobs/TEIP5-Documentation/workspace'
make: *** [dist-doc.stamp] Error 2
Archiving artifacts
Permalink 08:01:38 am, by mholmes, 151 words, 145 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 90

Jenkins2: oXygen prefs and .java directory; and Stylesheets now working!

The Jenkins process runs as the jenkins user, but when oXygen is invoked, it tries to read/write from /root/.com.oxygenxml, and since that directory is owned by root, it fails. I've chowned it to jenkins:root (jenkins appears to have no group), and I'm trying again. I've also created /root/.java, since oXygen seems to need that, and made it readable by the jenkins user.

I also discovered, from the remaining errors in the Stylesheets task, that there is no symbolic link trail for the Java jar executable. This is probably the result of my installing the Sun JDK manually instead of from the repos. So I did this, following the analogy of the java symbolic link:

sudo ln -s /usr/local/java/jdk1.6.0_25/bin/jar /etc/alternatives/jar
sudo ln -s /etc/alternatives/jar /usr/bin/jar

And now the Stylesheets task completes!

Now there's just the blasted Han Nom font issue.


Permalink 03:26:59 pm, by mholmes, 121 words, 1888 views   English (CA)
Categories: Announcements; Mins. worked: 90

Fixes for 32-bit JRE

SR is rebuilding the TEI deb now, presumably with an oXygen that has no JRE with it. I've removed the JRE in my Jenkins oXygen, and also placed a key file in the oXygen directory (which I hope will prevent it from trying to read /root/.com.oxygenxml to find the key). Running the file at the command line now does not fail -- well, it does, but with an error about the number of arguments required -- so I ran a test build of it through Jenkins to see if the actual thing worked, and the build did start successfully, but failed with what looks like an XSLT problem during a test. Waiting for feedback from SR about that.


Permalink 03:43:40 pm, by mholmes, 213 words, 137 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 120

Jenkins2: problem is 32-bit JRE

From help I got from the oXygen forum, I've determined that the problem is that the JRE in the oXygen installation is 32-bit.

It was the teideb package repo that presumably gave me my oXygen install (I didn't do anything else to get oXygen installed). The package would seem to be tei-oxygen_12.1-tei1_all.deb from here, which contains a complete oXygen package (in data.tar.gz) that does indeed include a JRE.

I just moved the JRE elsewhere, so now it's presumably using the installed Sun Java. But I'm getting another error now:

    using oXygen XML Editor stylesheet documentation generator.
    Exception in thread "main" java.lang.ExceptionInInitializerError
    at ro.sync.xsl.documentation.XSLStylesheetDocumentationGenerator.main(Unknown Source)
    Caused by: java.lang.RuntimeException: The preferences directory: /root/.com.oxygenxml
    cannot be accessed !/n Please check the read/write access on that folder and its ancestors !

There is no such directory, of course. The Jenkins process that's initiating the stylesheet script file is apparently running as root, so perhaps that's why it's looking there for preferences, but this is a headless server that doesn't need any GUI layout information. What would it be looking for in the preferences directory? I've posted for help on the oXygen forum.


Permalink 08:23:47 am, by mholmes, 143 words, 2036 views   English (CA)
Categories: R & D, Activity log, Notes; Mins. worked: 45

Configuration change, and permissions issue

Looking at the Jenkins interface this morning, I realized that I'd forgotten to give the anonymous user read access to jobs, so I've done that. I also tracked down the issue with the stylesheets to (I think) a permissions problem with the oXygen installation; it was installed via sudo, so all the files ended up owned by root, whereas the job is running under a different user. I've chowned everything to hcmc:hcmc, and we'll see if that solves it, but it might have to be chowned to whatever user Jenkins runs under. Running a stylesheet build now to see if it works.

EDIT: But it doesn't. The same error shows up even when java is world r-x. Posted on the oXygen forum for a possible solution, and also heard from SR that he remembers this but doesn't remember how he fixed it.


Permalink 11:38:53 am, by mholmes, 73 words, 1987 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 45

Jenkins2: Build problems are from missing packages

Looked at the logs for the first failed build, and found this:

(cd debian-tei-roma; debclean; debuild -i.svn -I.svn -uc -us)
/bin/sh: debclean: not found
/bin/sh: debuild: not found

These are in the package devscripts, so sudo apt-get install devscripts. I'll try the build again.

Looking at the stylesheet failure, we're missing xsltproc, so sudo apt-get install xsltproc. Then trying again, noticed I need debhelper, so sudo apt-get install debhelper.

Permalink 11:04:52 am, by mholmes, 102 words, 1928 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 60

Jenkins2: Copying Jenkins job setup from SVN

The Jenkins configuration data is stored in /var/lib/jenkins; the config data for Jenkins1 is stored on the SVN server. I imported the configurations from there, and I have builds going, but builds were failing at the end because the Log Parser plugin didn't seem to have any rules. SR let me know that the rules were in $SF/trunk/P5/Utilities/hudson-log-parse-rules, and I figured out from the configs that they needed to be in /var/lib/jenkins, so I copied them there. Trying a stylesheet build again...

Permalink 09:29:44 am, by mholmes, 113 words, 1924 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 60

Security setup and plugins on Jenkins2

For the security setup, I chose the "Standard security setup" which uses Jenkins's own database and matrix-based permissions. I let anonymous have read only, and created a tei user with full permissions.

I'm now installing plugins, based on the instructions in TEI SF. The following plugins were there by default:

  • CVS Plugin
  • Maven Integration plugin
  • SSH Slaves plugin
  • Jenkins Subversion Plug-in

I had to install these manually:

  • Emotional Hudson Plugin
  • Green Balls
  • Copy Artifact Plugin
  • JobConfigHistory Plugin (identified in TEI doc as "Hudson Job Configuration History Plugin")
  • Plot Plugin
  • Log Parser Plugin
  • SCP plugin (identified in TEI doc as "Hudson SCP publisher plugin")
  • WebSVN2 Plugin (identified in TEI doc as "Hudson WebSVN2 Plugin")


Permalink 01:32:58 pm, by mholmes, 1366 words, 258 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 300

Building Jenkins2

Started my first attempt at building Jenkins2 with my desktop VirtualBox. Steps:

  • Created a new Ubuntu 64-bit VM with 2GB of RAM and an expanding 25GB HD.
  • Booted and installed from a downloaded 64-bit PC (AMD64) server install CD ISO.
  • Let the installer determine the partitioning (it used LVM and created three partitions).
  • ERRONEOUSLY chose the "Tomcat Java Server" option to start with. Note that this is not necessary; in fact Tomcat gets in the way, and I had to uninstall it later.
  • Rebooted after install, did updates, and then installed curl.
  • Downloaded and installed the Oracle JDK (downloaded it locally, put it on a server, and curled it to the VM). NOTE: There are Sun Java packages in the repos, I think; it would probably be better to install them, but I couldn't quite figure out how and I was in a rush. I think I could have done sudo apt-get install sun-java6-jre and sudo apt-get install sun-java6-jdk.
  • Created /usr/local/java, copied the JDK binary there, made it executable and executed it.
  • Added the following to /etc/profile:
    • JAVA_HOME=/usr/local/java/jdk1.6.0_25
    • PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
    • export JAVA_HOME
    • export PATH
  • Configured Java through update-alternatives:
    • sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jre1.6.0_25/bin/java" 1
    • sudo update-alternatives --set java /usr/local/java/jre1.6.0_25/bin/java
  • Rebooted and tested with java -version that the JDK is now default the Java VM.
  • Tried to do waiting updates with apt-get, but they were held back, so did them with aptitude. Why is this necessary?
  • Couldn't find Tomcat, so did sudo apt-get install tomcat6, and installed it. This seemed to update the openjdk too, but it didn't mess with the configuration of the Oracle Java.
  • After reading around a bit, I decided I'd rather use Tomcat7, so I created /usr/local/apache-tomcat-7.0.12, and used wget to download the tar.gz file, and then did tar xvzf to unwrap it.
  • Added my usual
  • Ran Tomcat and confirmed it was working by curling

Next I wanted to find a convenient way to access the VM's Tomcat from the host, so I ran this in a terminal on the host, after shutting down the vm:

VBoxManage modifyvm "Jenkins2" --natpf1 "tomcat,tcp,,9090,,8080"

This should set up port forwarding between the host desktop port 9090 and the Tomcat running on port 8080 in the VM.

The port-forwarding worked like a charm, but in the process I discovered that Tomcat 6 is set to run as a service. So I have Tomcat 7 available, but I'll stick with Tomcat 6 from the repos for the moment, unless I find a reason to abandon it.

NOTE: All of the Tomcat stuff was unnecessary, but the Sun/Oracle Java is required.

  • Added the Jenkins key and repo as explained here, and installed Jenkins.
  • Having done that, I realized that it's actually starting by itself on port 8080; it doesn't need to run inside Tomcat. So I edited its script in /etc/init.d/jenkins to add this to the arguments: --httpPort=8081. This should make it start on 8081 and not conflict with Tomcat.
  • Then I shut down the VM and ran this on the host:
    VBoxManage modifyvm "Jenkins2" --natpf1 "jenkins,tcp,,9091,,8081"
    which should let me see Jenkins on 9091 on the host, without having to turn off Tomcat for the moment. However, in future, we should probably just leave Tomcat out of it.
  • Despite my re-configuration, Jenkins kept trying to start on 8080. Meanwhile, I got sick of the limited terminal available in the headless VM, so I did sudo apt-get install openssh-server, and tested it, then shut down the VM and ran this:
    VBoxManage modifyvm "Jenkins2" --natpf1 "guestssh,tcp,,2222,,22"
    Now I can connect to the VM using:
    ssh -l hcmc -p 2222 localhost
    which means I can easily copy/paste from browsers, use a larger terminal, etc.
  • So then I went back and got rid of Tomcat (sudo apt-get remove --purge tomcat6), and then copied my backup of the /etc/init.d/jenkins back over the edited one to remove the attempt to reconfigure the httpPort. Then a restart gave me.... A WORKING JENKINS on port 8080.

Next, install texlive and friends:

Texlive installed. Then I had to do sudo apt-get install subversion, because I'll be needing that.

Next, I followed my own instructions to get TEI installed.

Then in trunk/P5 I did make dependencies, and then sudo apt-get install [the list of dependencies].

However, an attempt to build P5 failed:

Checking you have running XML tools and Perl before trying to run transform...
xmllint:make: *** [check.stamp] Error 1

Turns out xmllint is in libxml2-utils, so had to sudo apt-get install libxml2-utils. Now a make is proceeding as expected.

Next, I wanted to do some cleanup -- on login, there was a message to the effect that there were packages available for upgrade, but the message was wrong (see discussion here). So I did sudo rm /etc/motd.tail.

Next, I tried make pdf, which is one of the targets that failed for me on my desktop due to the absence of TeXLive 2010. It failed again for the same reason; although I've installed TeXLive 2010, the system has at least one symbolic link (from /usr/bin/xelatex to /usr/bin/xetex) which is from the old Debian 2009 TeX package.

So the instruction on the TEI wiki page for installing packages, which includes texlive, is wrong; that shouldn't be installed. I've now removed it (sudo apt-get remove texlive). Then I added the install location of the 2010 version to my PATH, by doing sudo nano /etc/profile, and adding PATH=$PATH:/usr/local/texlive/2010/bin/x86_64-linux to it.

Then I tried make pdf again, and got the same error, so it's not caused by the texlive version after all. Although the error message suggests that the error is caused by a space in a font name, I think it's actually caused by the absence of the font. Reported this to SR, who modified the source so that the Minion Pro and Myriad Pro fonts are no longer required (they're proprietary anyway). Now I can build a PDF, but I'm still getting an error at the end.

Looking in job$.log (after deleting it, and re-running the make pdf to be sure it's fresh), I see that it still starts with this:

hcmc@Jenkins2:~/tei/trunk/P5$ more job\$.log
kpathsea: Invalid fontname `Myriad Pro', contains ' '
kpathsea: Invalid fontname `Myriad Pro', contains ' '
kpathsea: Invalid fontname `Myriad Pro', contains ' '
kpathsea: Invalid fontname `Myriad Pro', contains ' '
kpathsea: Invalid fontname `Myriad Pro/B', contains ' '
kpathsea: Invalid fontname `Myriad Pro', contains ' '
kpathsea: Invalid fontname `Myriad Pro/I', contains ' '
kpathsea: Invalid fontname `Myriad Pro', contains ' '
kpathsea: Invalid fontname `Myriad Pro/BI', contains ' '
kpathsea: Invalid fontname `Myriad Pro:', contains ' '

The rest of the file consists mainly of

** WARNING ** Failed to convert input string to UTF16...

but there are two more Myriad Pro errors towards the end, and finally there's this:

kpathsea: Invalid fontname `HAN NOM A', contains ' '
kpathsea: Invalid fontname `HAN NOM A', contains ' '
kpathsea: Invalid fontname `HAN NOM A', contains ' '
kpathsea: Invalid fontname `HAN NOM A/B', contains ' '
kpathsea: Invalid fontname `HAN NOM A', contains ' '
kpathsea: Invalid fontname `HAN NOM A/I', contains ' '
kpathsea: Invalid fontname `HAN NOM A', contains ' '
kpathsea: Invalid fontname `HAN NOM A/BI', contains ' '
kpathsea: Invalid fontname `HAN NOM A:', contains ' '

with lots more instances of the same thing. So I suspect there are still references to Myriad Pro in the code, and there's also this other font, which I've never heard of. It doesn't seem to be available in the repos either, as far as I can tell (there's no ttf-hannom, although Arch Linux seems to have one). Waiting to hear from SR where he got it from.


Permalink 05:14:24 pm, by mholmes, 176 words, 271 views   English (CA)
Categories: R & D, Activity log; Mins. worked: 60

Building eXist trunk and deploying locally

After JN had to check out the current source and build to get around a recent bug, I took the opportunity to go through the same process and create an eXist war file:

  • mkdir exist_trunk
  • cd exist_trunk
  • svn co .
  • ./ clean
  • ./ download-additional-jars
  • cd extensions
  • cp
  • [Edit to turn on FOP.]
  • cd ../
  • ./
  • ./ -f build/scripts/jarsigner.xml
  • ./ dist-war
  • mkdir [tomcat]/webapps/exist
  • cp dist/exist-1.5....war [tomcat]/webapps/exist
  • jar xfv [tomcat]/webapps/exist/exist-1.5...war

Then I started Tomcat 6.0.26, but eXist failed to start. So I installed the latest Tomcat (7.0.12). It took me a while to get this going, but I eventually figured out that all the .sh and all the .jar files in the bin directory need to be executable. I used my usual file to start it up, and eXist came up working fine. So the latest eXist seems to prefer the latest Tomcat.
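The executable-bit fix can be sketched like this, using a scratch directory to stand in for the real [tomcat]/bin (the file names are just examples of what ships in that directory):

```shell
# Scratch-dir demo of the fix: the .sh and .jar files in Tomcat's
# bin/ must be executable before startup will work
bin=$(mktemp -d)
touch "$bin/startup.sh" "$bin/catalina.sh" "$bin/bootstrap.jar"
chmod -x "$bin"/*                  # simulate the as-unpacked state
chmod +x "$bin"/*.sh "$bin"/*.jar  # the actual fix
[ -x "$bin/startup.sh" ] && echo "startup.sh is executable"
```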


Permalink 02:10:12 pm, by mholmes, 193 words, 115 views   English (CA)
Categories: R & D, Activity log; Mins. worked: 30

TEI build now working

SR has removed some build targets and components from the makefile, which means I can now build P5 without errors, but some of the issues will need to be handled at some point -- for instance, the error in building the PDF is due to the need for TeX Live 2010, whereas current Ubuntu versions are stuck at 2009. Still working on a detailed plan for creating a clone Jenkins machine, for which we'll need to solve all these problems. Here are some bits from email for the record (our suggested plan and SR's responses):

 -Start with an Ubuntu 10.04 headless server install.
   -Install the Oracle JDK.
   -Install Tomcat (any particular version? Latest? What's in repos?).
doesnt matter, version 6 or 7 latest

   -Install Jenkins from here:
   -Install Tex Live 2010 from here:
   -Follow these instructions to get a working build setup:
Jenkins will do its own checkout, so no need for some of that


  I'm pretty sure we'll also have to install some extras (fonts and the 
note the result of 

    make dependencies


Permalink 04:51:01 pm, by mholmes, 136 words, 320 views   English (CA)
Categories: Activity log, Documentation; Mins. worked: 60

NVivo problem

At the end of the day, NVivo froze while shutting down. This meant that none of the day's changes were written to disk, because apparently it stores them up and writes them at the end of the session. We found a .tmp file in docs and settings -- can't remember the exact path, but it was named like the actual data file -- and this turned out to be a complete copy of the altered data, with the exception that the dates of recent changes had not been recorded. Backed up the original day-old file as "...last known good", then copied the tmp file over to the original filename, and everything should be good to go, but when M comes in next, she should have a good look to make sure nothing else has been lost.


Permalink 02:39:29 pm, by jamie, 149 words, 251 views   English (CA)
Categories: Notes; Mins. worked: 0

MySQL requests no longer sent to

Update from Stewart 20120622: the dbadmin says that the originating request is supposed to go through, who forwards it on to

  • Your affiliation with UVic (Faculty/Staff, etc)
  • Your department
  • Which netlink Id is associated with this database
  • The database name (must be of the form netlink_whatever)
  • Whether or not you will be using drupal for this instance.

As per my correspondence with, all MySQL-related support requests/questions should be sent to rather than

Hi Jamie,
The unix sysadmin no longer handles MySQL requests, these should be set
to, if you resend this email to that address it will
find it's way into the correct queue of our ticketing systems

Not sure whether he actually means only MySQL, or whether PostgreSQL falls under this new support email as well.


Permalink 09:05:00 am, by Greg, 123 words, 1291 views   English (CA)
Categories: Activity log; Mins. worked: 10

Apt upgrade problem and solution (missing final newline)

The other day I had a problem upgrading my machine. No matter what I tried I continually got an error like this:
files list file for package 'libvpx0' is missing final newline

I left it alone, figuring that it was a package-related problem, but it persisted for a few days. A bit of Googling turned up this page: which provides the answer.

It turns out I had two corrupted list files: /var/lib/dpkg/info/libvpx0.list and /var/lib/dpkg/info/libtiff4.list. I removed the list files and ran apt-get upgrade again and everything happily upgraded.

So, "missing newline" problem is solved by removing the offending .list file from /var/lib/dpkg/info
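Before deleting anything, you can identify the offending files programmatically; the snippet below reproduces the situation in a scratch directory (the file names echo the two real culprits, but the real files live in /var/lib/dpkg/info):

```shell
# A file "missing final newline" is one whose last byte is not \n;
# $(tail -c 1 file) is empty when the last byte IS a newline
dir=$(mktemp -d)
printf 'usr/lib/libtiff.so\n' > "$dir/libtiff4.list"   # healthy: final newline
printf 'usr/lib/libvpx.so'    > "$dir/libvpx0.list"    # corrupt: no final newline
for f in "$dir"/*.list; do
  [ -n "$(tail -c 1 "$f")" ] && echo "missing final newline: $(basename "$f")"
done
```

Run against /var/lib/dpkg/info/*.list, the same loop would print only the corrupted lists.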


Permalink 10:35:03 am, by Greg, 287 words, 259 views   English (CA)
Categories: Labs, Activity log; Mins. worked: 15

Semi-useful completely fun

Looking for an audible notification for lab machines with no speakers because I want to be able to run stuff and hear a response from across the room (perhaps it's lazy, but it's also convenient...).

It turns out that the PC speaker module (pcspkr) is blacklisted in Ubuntu, so I had to turn it back on, like this:

sed -i 's/blacklist pcspkr/#blacklist pcspkr/' /etc/modprobe.d/blacklist.conf && modprobe pcspkr

The "beep" package will allow you to make some noise: apt-get install beep

To test, run this command to play Funkytown:

beep -f 38 -l 3000 -n -f 392 -l 100 -n -f 38 -l 100 -n -f 392 -l 100 -n -f 38 -l 100 -n -f 349 -l 100 -n -f 38 -l 100 -n -f 392 -l 300 -n -f 38 -l 100 -n -f 294 -l 300 -n -f 38 -l 100 -n -f 294 -l 100 -n -f 38 -l 100 -n -f 392 -l 100 -n -f 38 -l 100 -n -f 523 -l 100 -n -f 38 -l 100 -n -f 494 -l 100 -n -f 38 -l 100 -n -f 392 -l 200

Or this one to play the theme from Beverly Hills Cop:

beep -f 659 -l 460 -n -f 784 -l 340 -n -f 659 -l 230 -n -f 659 -l 110 -n -f 880 -l 230 -n -f 659 -l 230 -n -f 587 -l 230 -n -f 659 -l 460 -n -f 988 -l 340 -n -f 659 -l 230 -n -f 659 -l 110 -n -f 1047 -l 230 -n -f 988 -l 230 -n -f 784 -l 230 -n -f 659 -l 230 -n -f 988 -l 230 -n -f 1318 -l 230 -n -f 659 -l 110 -n -f 587 -l 230 -n -f 587 -l 110 -n -f 494 -l 230 -n -f 740 -l 230 -n -f 659 -l 700

Or the Pink Panther theme:

beep -f 622.22 -l 200 -n -f 659.26 -l 300 -D 100 -n -f 738 -l 200 -n -f 784 -l 300 -D 100 -n -f 1046.50 -l 200 -n -f 993.60 -l 300 -D 100 -n -f 659.26 -l 200 -n -f 784 -l 300 -D 100 -n -f 993.60 -l 200 -n -f 936.8 -l 600 -n -f 880 -l 200 -n -f 784 -l 200 -n -f 659.26 -l 100 -n -f 587.33 -l 100 -n -f 659.26 -l 400


Permalink 04:16:45 pm, by Greg, 847 words, 380 views   English (CA)
Categories: R & D, Activity log, Activity log, Documentation; Mins. worked: 300

Migraine-inducing bash one-liner

Background: lab machines are DHCP, therefore the IP address cannot be counted on, nor can the canonical name be set in DNS (apparently). rSnapshot requires a canonical something (obviously) so it knows where to go to grab the snapshot.
Solution: put each client machine in the hosts file so rSnapshot knows where each machine is. This of course creates a new problem.
Problem: if the client machine's IP address changes, the hosts file on the rSnapshot server needs to be adjusted.

I looked at using nmap/arp to do this job, but came to the conclusion that the results were unreliable. Occasionally, a machine would not respond to the request, making the script unreliable. For posterity, this is how the script worked:
1) set var="MAC address" for each machine (e.g. machineName="00:DE:6F:34:34:00")
2) set ipArr=( $(nmap -sP|grep -oh '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'| tr '\n' ' ') ) - which will output an array that looks something like this:
3) iterate through the array and run arp on each IP:
for (( i = 0 ; i < ${#ipArr[@]} ; i++ )); do
echo `arp -a ${ipArr[$i]}`
done
which will produce output that looks like this: (192.168.16.X) at XX:XX:XX:XX:XX:XX [ether] on eth0
ad nauseam

Once you get all that you can wrangle it in to a hosts file for the rSnapshot machine. As I said, though, sometimes it won't be complete - and I have no idea why; it remains a more elegant solution if it can be made more reliable. I say this because it would all run from the server. The solution I'm currently working on is a local/remote combo that I just don't like.

So the solution I have right now is to enter each machine in to the rSnapshot config file and set a cron job to grab a snapshot (for this example let's say that the daily snapshot will happen at 4pm).
On each client machine, set a cron job that sets the hostname and current IP address and sends it to the rSnapshot server. Here's how I'm doing this:
1) Set up SSH-keys on the client so it can send stuff to the server on its own.
2) Run this doozy on the client end to add *this* client's info to the rSnapshot server's /etc/hosts file, removing old entries for *this* machine along the way:
rhost=$(h=$(echo -e $HOSTNAME);echo $h `ifconfig eth0|grep 'inet addr:'|cut -d ":" -f2|sed 's/ Bcast//gI'`) ; ssh "sed -i /\"$h\"/d /etc/hosts;echo $rhost >> /etc/hosts"

Right now I'm working on getting it running from cron but I'm having some trouble with it. When I just dump the above into a bash script and run it from the command line (or from cron), I get an error: "sed: -e expression #1, char 2: unterminated address regex", which I do not currently understand.
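One possible cause (my guess, not a diagnosis from testing the actual setup): h is assigned inside the $( ) command substitution, which runs in a subshell, so in a fresh script $h is empty when the sed expression is built, and sed gets a malformed address. At an interactive prompt, an $h left over from earlier experiments would mask the problem. A minimal reproduction, with made-up hostname and IP:

```shell
# h set inside $( ) lives only in the subshell
unset h
rhost=$(h=myhost; echo "$h 192.168.16.99")
echo "inside substitution: [$rhost]"   # [myhost 192.168.16.99]
echo "outside:             [$h]"       # [] -- empty, so sed gets //d
# possible fix: assign h in the current shell before building the command
h=myhost
rhost="$h 192.168.16.99"
echo "after fix:           [$h]"       # [myhost]
```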

However, here's how it works:
1) echo -e $HOSTNAME returns the name you gave the machine
2) `ifconfig eth0|grep 'inet addr:'|cut -d ":" -f2|sed 's/ Bcast//gI'` gets you the IP address for the machine by asking ifconfig for all the info it has on eth0 (the NIC), chopping it up and discarding extraneous stuff (the *cut* bit returns the second field after the colon, while the *sed* bit deletes the stuff after the IP address - in this case that's " Bcast").
3) wrap this up in a pair of variables (h being the hostname and rhost being the hostname/IP address combination). This is the first part of the one-liner: rhost=$(h=$()).
4) once we have values for host name and IP address (h and rhost) we head over to the remote machine (our rSnapshot server) via ssh and send it a couple of commands, by adding a quoted command to the end of the ssh invocation. The command is double-quoted: double quotes expand variables, single quotes don't, and we have variables.
5) the commands on the rSnapshot side go like this:
sed -i /\"$h\"/d /etc/hosts (which removes any lines, using the d command, which include our host name - this gets rid of old entries)
echo $rhost >> /etc/hosts (which adds an entry for the client to the hosts file)

Things I need to do to make this really usable:
1) make it work from inside a script that cron can successfully run (see above for the problem)
2) adjust the remote sed command to only remove lines that begin with hostname. Right now, this will remove *any* line that includes the hostname at all. For example, consider a file like this:


If I want to edit the line for the machine called tron and I use my sed command up above I'll end up with an empty file because each machine includes the string "tron". I don't name machines in a way that this would be a problem, but it's a pretty big problem if I inadvertently do something stupid when naming a new machine.
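A sketch of a fix for to-do item 2: since hosts entries are "IP<whitespace>hostname", anchoring the pattern at end-of-line after whitespace means "tron" no longer wipes out other names that merely contain the string. The hosts entries below are invented for the demo:

```shell
# delete only the line whose hostname FIELD is exactly $h
hosts=$(mktemp)
printf '192.168.16.5\ttron\n192.168.16.6\tmegatron\n' > "$hosts"
h=tron
sed -i "/[[:space:]]$h\$/d" "$hosts"
cat "$hosts"   # only the megatron line survives
```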


Permalink 04:12:23 pm, by Greg, 259 words, 100 views   English (CA)
Categories: Servers, R & D, Activity log; Mins. worked: 500

Automated backups for lab machines

Huge progress today.
I have a virtual rsnapshot environment working that automatically backs up a remote machine once per day. Here's how:
1) install rsnapshot from the repos
2) configure it by editing the conf file at /etc/rsnapshot.conf
* make sure to edit "snapshot_root"
* uncomment "cmd_cp"
* uncomment "cmd_ssh"
* edit intervals to something useful, like "interval daily 5" - commenting out intervals not being used. This setting takes 5 snapshots per week.
* set log file location
* enable lockfile (rsnapshot should run as root)
* set includes and excludes if needed - I set up an excludes file
* set backup points - one for each machine: e.g. "backup user@machinename/ip:/home/netlink/ machinename/"

3) Prime the pot by manually running a backup: sudo rsnapshot daily
4) Then, set a crontab for running things every weekday at 6pm:
crontab -e
0 18 * * 1,2,3,4,5 /usr/bin/rsnapshot daily
5) Wait until 6pm
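Put together, the conf edits in step 2 look roughly like this (paths and the backup point are placeholders; note that rsnapshot requires TAB characters, not spaces, between fields):

```
# /etc/rsnapshot.conf -- sketch of the edits listed above
snapshot_root	/var/cache/rsnapshot/
cmd_cp	/bin/cp
cmd_ssh	/usr/bin/ssh
interval	daily	5
logfile	/var/log/rsnapshot.log
lockfile	/var/run/rsnapshot.pid
exclude_file	/etc/rsnapshot.exclude
backup	user@machinename:/home/netlink/	machinename/
```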

I've yet to test this on a real server/machine, but I'll set up Titchy as an rsnapshot server to take the next step.
Also to do: adjust rollover so that stale snapshots don't just get deleted; the latest version should always be archived somewhere. Is this rdiff?
NOTE: rsnapshot handles the above issue on its own.

NOTE: when rsnapshot runs from the collector end (i.e. pulling *from* a client) it invokes a process on the client end that shows up as "rsync --server --sender -logDtprRe.iLsf --numeric-ids . /home/gregster/Desktop" where "/home/gregster/Desktop" is the path to the stuff getting backed up. This will be useful when locking down legal processes using single-use ssh-keys.


Permalink 04:03:24 pm, by Greg, 106 words, 191 views   English (CA)
Categories: Labs, Activity log; Mins. worked: 300

Update to LDAP problem

Previous post update: the problem appears to be a bug in the pam code itself which allows this kind of behaviour in some cases. The developer thinks it's possible that our system is allowing clients to anonymously bind. Fortunately there's a patch for that. Unfortunately recompiling PAM is close to being the last thing I want to do. Fortunately, there's a new version (v0.7.13 vs. the current v0.7.6) from Natty that installed with no drama. Also fortunately, it works!
I've installed it on carrot to test it in the real world.
I also spent a good deal of time submitting a bug report on Launchpad (Bug #720401).


Permalink 03:36:59 pm, by jamie, 81 words, 102 views   English (CA)
Categories: Activity log; Mins. worked: 15

English Grad Apps program fix

Small fix to the english grad apps program that allows reviewers to review PhD and MA applications online. The names of applicants are passed to a Javascript function that is triggered upon button click, like this (names used are not real):

<button onclick="showform('LASTNAME-Firstname-PhD')">Assess...</button>

However, if the name had an apostrophe, it was breaking the Javascript:

<button onclick="showform('LAST'NAME-Firstname-PhD')">Assess...</button>

So, I simply escaped the names with addslashes().


Permalink 12:26:01 pm, by mholmes, 187 words, 156 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 90

Testing of Wine + Image Markup Tool on the Mac

After discovering the excellent WineBottler package of Wine for the Mac, and testing it with one of my personal projects (Markin) at home, I was hopeful that it might provide a means to run the Image Markup Tool on the Mac. GN and I thoroughly tested a variety of solutions: WineBottler, PlayOnMac, and Wineskin. All did more or less the same thing: install Wine (and sometimes XQuartz as well), and build a customized runtime environment (a "prefix") for the application, bundling it into a .app package. Everything worked well with the exception of the graphics handling. Opening a normal Mariage-sized file took several minutes, during which the app was unresponsive. This is in contrast to IMT behaviour on Linux Wine, where everything works as normal (if anything, a bit snappier than Windows). Our conclusion is that the mapping of Windows graphics calls to the OSX graphics system is not as sophisticated as it is on Linux, so the fairly intensive operations the IMT does to resize and zoom images, which use the Graphics32 library, run extremely slowly. Maybe this will solve itself over time. We tested with Wine 1.1.4 and 1.2.2.


Permalink 11:05:50 am, by Greg, 39 words, 227 views   English (CA)
Categories: Notes; Mins. worked: 0

Bash tutorials

Found some excellent Bash tutorials here:

The list points to two of the best - which I've found to be very good:


Permalink 03:45:46 pm, by mholmes, 72 words, 106 views   English (CA)
Categories: R & D, Activity log; Mins. worked: 10

Warning in Tomcat-stable on Pear

ER passed on some error messages from Tomcat-stable:

2011-01-25T13:40:03-08:00 daemon.err tomcat: 
[ajp-8009-4]: Use of deprecated, since 2008-04-02, function 
fn:item-at(). It will be removed really soon. Please use
$x[1] instead.

It's not clear which project they're from, but item-at is used in xrequest.xq in Graves (line 365). I'll have to test replacing it.


Permalink 12:55:05 pm, by Greg, 37 words, 172 views   English (CA)
Categories: Labs, Activity log; Mins. worked: 120

Lab rebuilds

I've finally gotten around to rebuilding onion and carrot with my latest script. No login problems yet, but OOo may still be flaky. One user has had a problem with insufficient user rights when saving documents. Investigating.

Permalink 12:46:22 pm, by Greg, 99 words, 172 views   English (CA)
Categories: R & D, Activity log, Activity log; Mins. worked: 120

Lab build script

I'm working on an updated version that includes stuff like setting up a printer by default, providing a choice of lab or developer build (extra apps, no ldap login).
I've got the printer going by creating a /etc/cups/printers.conf file with info on the printer (make, model etc.) and putting a pre-made PPD at /etc/cups/ppd/HCMC.ppd. Finally, printers.conf needs to be chgrp'd to group "lp" and the cups service needs to be restarted.
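For reference, a minimal sketch of what a hand-built /etc/cups/printers.conf can contain; the printer name, description, and URI below are invented, and the real file was created with the actual make/model info. The name in the <Printer> element is what the PPD filename (HCMC.ppd) must match:

```
# /etc/cups/printers.conf -- sketch; values are placeholders
<Printer HCMC>
Info HCMC lab printer
DeviceURI socket://192.168.1.50:9100
State Idle
Accepting Yes
</Printer>
```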
I'm now working on making it a bit more informative (echoes out what is about to happen, provides choices, etc.)


Permalink 01:06:58 pm, by Greg, 939 words, 228 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 15

Set up two mice on login

ANOTHER UPDATE: bizarre! When I first power-up and log in the script either doesn't fire or just doesn't work. If I log out and back in - it does! WTF?

UPDATE: this is an annoyingly difficult thing to nail down, but I think I've got it now.
I wrote a script called mice and put it in my path at: /usr/local/bin/mice
Then, I add it to my startup applications pref panel like this:
sh -c '/usr/local/bin/mice'
And it seems to work.
Here's the script I wrote:

# get ID and current set up for each input device
emID=`xinput list|grep 'Kensington Kensington Expert Mouse'|cut -d "[" -f1|grep -o [0-9].`
stID=`xinput list|grep 'Kensington Kensington Slimblade Trackball'|cut -d "[" -f1|grep -o [0-9].`
emButtonMap=`xinput --get-button-map 'Kensington Kensington Expert Mouse'`
stButtonMap=`xinput --get-button-map 'Kensington Kensington Slimblade Trackball'`
emAccel=`xinput --list-props 'Kensington Kensington Expert Mouse'|grep 'Constant'|tr "\t" "|"|cut -d "|" -f3`
stAccel=`xinput --list-props 'Kensington Kensington Slimblade Trackball'|grep 'Constant'|tr "\t" "|"|cut -d "|" -f3`
emBMap="3 2 1 5 4 6 7 8 9 10 11 12"
stBMap="1 8 3 4 5 6 7 2 9 10 11 12"
# desired 'Device Accel Constant Deceleration' value (matches the set-prop calls below)
emDACD="3"
stDACD="3"

# check current settings against desired setup and change if necessary
if [ "$emButtonMap" != "$emBMap" ]; then
xinput --set-button-map $emID $emBMap
emBMapCurr=`xinput --get-button-map 'Kensington Kensington Expert Mouse'`
tsEm=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$tsEm] set Expert Mouse (device ID: $emID) button map to $emBMapCurr">> /home/gregster/.hcmc/mice.log
fi

if [ "$stButtonMap" != "$stBMap" ]; then
xinput --set-button-map $stID $stBMap
stBMapCurr=`xinput --get-button-map 'Kensington Kensington Slimblade Trackball'`
tsSt=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$tsSt] set Slimblade Trackball (device ID: $stID) button map to $stBMapCurr" >> /home/gregster/.hcmc/mice.log
fi

if [ "$emAccel" != "$emDACD" ]; then
xinput --set-prop $emID 'Device Accel Constant Deceleration' 3
emDACDCurr=`xinput --list-props 'Kensington Kensington Expert Mouse'|grep 'Constant'|tr "\t" "|"|cut -d "|" -f3`
tsEmAc=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$tsEmAc] set Expert Mouse (device ID: $emID) acceleration to $emDACDCurr" >> /home/gregster/.hcmc/mice.log
fi

if [ "$stAccel" != "$stDACD" ]; then
xinput --set-prop $stID 'Device Accel Constant Deceleration' 3
stDACDCurr=`xinput --list-props 'Kensington Kensington Slimblade Trackball'|grep 'Constant'|tr "\t" "|"|cut -d "|" -f3`
tsStAc=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$tsStAc] set Slimblade Trackball (device ID: $stID) acceleration to $stDACDCurr" >> /home/gregster/.hcmc/mice.log
fi



In order to get my two trackballs working as I like (one left-handed, one right-handed) I tried a few ways but ended up going with adding instructions to the System > Preferences > Startup applications applet. (BTW, this adds .desktop files to ~/.config/autostart/ directory).

The trick is to get your syntax just right. Running something at the terminal is not simply duplicated in the Startup applications applet. I ended up creating one startup app for each instruction (doing many in one eluded me). This is what I put in the "command" field for each one (that is, each of the following lines is the meat of one startup app):
sh -c "xinput --set-prop 'Kensington Kensington Expert Mouse' 'Device Accel Constant Deceleration' 4"
sh -c "xinput --set-button-map 'Kensington Kensington Expert Mouse' 3 2 1 5 4 6 7 8"
sh -c "xinput --set-prop 'Kensington Kensington Slimblade Trackball' 'Device Accel Constant Deceleration' 4"
sh -c "xinput --set-button-map 'Kensington Kensington Slimblade Trackball' 1 8 3 4 5 6 7 2"

You have to use an invocation like sh to get it to run, and sh needs the -c flag so that it reads the command from the string that follows rather than from standard input. Also, strings with spaces (e.g. Device Accel Constant Deceleration) need to be enclosed in SINGLE-QUOTES. Standard practice seems to put the entire command following sh -c in single-quotes, but in this case I use double-quotes to avoid painful escaping.
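The single-inside-double rule can be sanity-checked with a toy command (standing in for xinput): count how many arguments the space-containing property name arrives as.

```shell
# inside a double-quoted -c string the single quotes survive,
# so the property name reaches the command as ONE argument
sh -c "set -- 'Device Accel Constant Deceleration'; echo \$#"
# prints 1; with the inner quotes lost it would print 4
```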

So, one of my .desktop files looks like this:

[Desktop Entry]
Exec=sh -c "xinput --set-prop 'Kensington Kensington Expert Mouse' 'Device Accel Constant Deceleration' 4"
Name[en_CA]=Add accel for ExpertMouse
Name=Add trackball configs
Comment[en_CA]=accel for ExpertMouse
Comment=accel for ExpertMouse

NOTE: unfortunately, this does not work all the time. Re-investigating.

UPDATE: this is ridiculously hard. It looks like even if I put the commands in my .profile something else resets the button-mapping. I'm going to try udev again. Of course, the problem I see with this actually working is that it's a system level adjustment, and it *should* be per-user.

Anyway, here's the way I'm proceeding:

1) Gather info on the device by running this:
udevadm info -a -p $(udevadm info -q path -n /dev/input/by-id/usb-Kensington_Kensington_Slimblade_Trackball-mouse)
2) Unplugging/plugging a mouse while tailing the Xorg log gets this output:
[ 28709.701] (II) config/udev: removing device Kensington Kensington Expert Mouse
[ 28709.702] (II) Kensington Kensington Expert Mouse: Close
[ 28709.702] (II) UnloadModule: "evdev"
[ 28717.858] (II) config/udev: Adding input device Kensington Kensington Expert Mouse (/dev/input/mouse0)
[ 28717.859] (II) No input driver/identifier specified (ignoring)
[ 28717.859] (II) config/udev: Adding input device Kensington Kensington Expert Mouse (/dev/input/event2)
[ 28717.859] (**) Kensington Kensington Expert Mouse: Applying InputClass "evdev pointer catchall"
[ 28717.859] (**) Kensington Kensington Expert Mouse: always reports core events
[ 28717.859] (**) Kensington Kensington Expert Mouse: Device: "/dev/input/event2"
[ 28717.880] (II) Kensington Kensington Expert Mouse: Found 8 mouse buttons
[ 28717.880] (II) Kensington Kensington Expert Mouse: Found scroll wheel(s)
[ 28717.880] (II) Kensington Kensington Expert Mouse: Found relative axes
[ 28717.880] (II) Kensington Kensington Expert Mouse: Found x and y relative axes
[ 28717.880] (II) Kensington Kensington Expert Mouse: Configuring as mouse
[ 28717.880] (**) Kensington Kensington Expert Mouse: YAxisMapping: buttons 4 and 5
[ 28717.880] (**) Kensington Kensington Expert Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200
[ 28717.881] (II) XINPUT: Adding extended input device "Kensington Kensington Expert Mouse" (type: MOUSE)
[ 28717.881] (II) Kensington Kensington Expert Mouse: initialized for relative axes.

Some links I've been using:


Permalink 08:22:00 pm, by mholmes, 150 words, 136 views   English (CA)
Categories: R & D, Documentation; Mins. worked: 0

Syncing Lightning calendars

I was happily sharing my Lightning (Thunderbird plugin) calendar by copying the calendar-data/local.sqlite file between home and work computers, but an update to Lightning made that suddenly impossible; even with the correct sqlite db in place, Lightning would show a completely empty calendar. Eventually I figured out what to do:

  • Shut down the Thunderbird which has the problem.
  • Install sqlite and sqlitebrowser (from the repos).
  • Open the db file, and find the cal_id field value in one of the tables (such as cal_alarms). Copy it somewhere.
  • Open prefs.js, and search for any entries with "calendar" in them. There should be a block of them, and they'll all look like this:
    user_pref("calendar.list.sortOrder", "5d27f585-81fb-4f2c-8686-cf28b7ee5a0a");
  • Replace the id value (the long GUID) with the cal_id from the Sqlite db. Save, then start Thunderbird.
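The last two steps can be scripted; the GUID below is the one from the example pref line above, while the cal_id value is invented and a temp file stands in for the real prefs.js:

```shell
prefs=$(mktemp)   # stands in for the real prefs.js
echo 'user_pref("calendar.list.sortOrder", "5d27f585-81fb-4f2c-8686-cf28b7ee5a0a");' > "$prefs"
cal_id="11111111-2222-3333-4444-555555555555"   # value read out of the sqlite db
sed -i "s/5d27f585-81fb-4f2c-8686-cf28b7ee5a0a/$cal_id/g" "$prefs"
grep calendar "$prefs"   # the pref now carries the db's cal_id
```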
Permalink 10:46:14 am, by Greg, 117 words, 175 views   English (CA)
Categories: Activity log, Documentation; Mins. worked: 30

@font-face recommendations

I'm recommending the following be adopted by HCMC as the way to embed fonts in a web page. It isn't my work - it's this guy's. His arguments are good - you should read the article.

Here's what to do:
Go to and create a webfont package using their @font-face generator. The generator will produce a comprehensive package for you to use in your site, including a stylesheet with the @font-face declaration in the above-mentioned format. This is the recommended method:

/* "Embed" Leander Regular */
@font-face {
  font-family: 'LeanderRegular';
  src: url('leander-webfont.eot');
  src:  local('☺'), url('leander-webfont.woff') format('woff'), url('leander-webfont.ttf') format('truetype'), url('leander-webfont.svg#webfontXxzioThE') format('svg');
  font-weight: normal;
  font-style: normal;
}


Permalink 08:40:06 am, by mholmes, 86 words, 110 views   English (CA)
Categories: R & D, Activity log, Documentation; Mins. worked: 30

New scanner working with Lucid

Attempting to get the new scanner working with Lucid. This is what I tried:

Rebooted, and it works. No need for all this with Maverick, though -- works out of the box.


This blog is the location for all work involving software and hardware maintenance, updates, installs, etc., both routine and urgent, in the server room, the labs and the R&D rooms.


