I got a replacement drive for rutabaga and got RE to do the actual driving on the array rebuild. It actually *was* trivial. Here's what we did:
1) run "hdparm -i /dev/sde" to get details on device sde. We need the serial number of each drive, so do that for every device.
2) Compare those serials to the drives in the machine. The one you don't have a serial for is the dead drive: it no longer shows up in /dev, so hdparm can't read it to get a serial.
3) pull the dead drive and replace it with a new one
4) boot the machine and dump a partition table from a good drive to the new one like this: sfdisk -d /dev/sdb | sfdisk /dev/sdc. This gives us identically sized partitions to work with.
5) add the new disk to the array like this: mdadm /dev/md0 -a /dev/sdc1
6) check progress on the rebuild like this: cat /proc/mdstat
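The steps above can be sketched as a command sequence (device names /dev/sdb, /dev/sdc and /dev/md0 are the ones from this rebuild; substitute your own, and double-check which drive is which before dumping partition tables):

```shell
# Identify the dead drive: print the serial of every sd? device.
# The drive that's missing here (or errors out) is the dead one.
for d in /dev/sd?; do
    echo "$d:"
    hdparm -i "$d" | grep -i serial
done

# After physically swapping the drive and rebooting:
GOOD=/dev/sdb   # a surviving member of the array
NEW=/dev/sdc    # the replacement drive

# Clone the partition table so the new drive has identically sized partitions
sfdisk -d "$GOOD" | sfdisk "$NEW"

# Add the new partition to the array, then watch the rebuild
mdadm /dev/md0 -a "${NEW}1"
cat /proc/mdstat
```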
I've downloaded the Seagate tools app (seatools_cli.tar) and I'll run it on the dead drive per Seagate's instructions when I get back. Once it's done it provides a code that I quote when I RMA the drive; my understanding is that this is required for a successful RMA.
RE has rebuilt mustard, but we didn't get 64-bit because the processor won't handle it.
Next step is to get the load balancer set up to handle the cluster.
After that, each existing virtual host will get an analogous virtual host (like vihistory2.uvic.ca) that gets handled by the load balancer/mustard.
Once each new VH is tested we can migrate the actual VH over and let mustard take over.
Once all VHs are migrated we can bring down lettuce and get it prepared for rebuild.
RE can set up www-dev like it is on the UVic web cluster (that is, some.machine.uvic.ca:8080 is mapped to the www-dev folder of an account). He can also set up a method for accessing VH-less apps via the usual ~username (e.g. http://hcmc.uvic.ca/~someproj points at the someproj user account's www folder).
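For the ~username mapping, a minimal sketch of what Apache's mod_userdir config might look like (the www directory name matches what's described above; everything else here is an assumption about the eventual setup, not what RE has actually configured):

```apache
# Hypothetical: map http://hcmc.uvic.ca/~someproj
# to the www directory of the someproj account.
<IfModule mod_userdir.c>
    UserDir www
    UserDir disabled root
</IfModule>
```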
We also gain the benefit that all such sub-domains are covered by the wildcard cert we have on hcmc.uvic.ca.
Both Greg and I have the VMware vSphere Client 4.1 installed in Windows 7 VMs. The installer has to be run in XP compatibility mode, but the app itself seems to run fine. This will eventually be our way of doing the limited number of things we're able to do in terms of managing VMs on the VM server (fennel.hcmc.uvic.ca).
A drive in the raid array on rutabaga has failed; rutabaga sent me an email that said "A Fail event had been detected on md device /dev/md0. It could be related to component device /dev/sdc1". Good timing, as Stew is gone and I'm going on vacation in two days.
However, Martin and I have a plan - hopefully we'll have a replacement drive installed tomorrow. I'll RMA the current drive and pop it back in when convenient. Then we'll have a spare if/when this happens again.
Have made considerable progress.
I can log in to a desktop environment, add an ldap user to local groups and define a homedir. The homedir isn't very useful yet (everyone gets the same home directory) but I'm hopeful.
Investigating the issues around dynamic creation of a home directory, I see that I might want to use nslcd (which requires libpam-ldapd and libnss-ldapd instead of libpam-ldap and libnss-ldap; note the final d) as it appears to be more flexible. See http://arthurdejong.org/nss-pam-ldapd/nslcd.conf.5
I can't seem to make pam_mkhomedir work in conjunction with nss_override_attribute_value homeDirectory /home/netlink/ (where /home/netlink is a string). Ideally, I'd like to do the override and then have pam_mkhomedir append the netlink id to the string.
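For what it's worth, the nslcd route may address exactly this: per the nslcd.conf man page linked above, attribute mappings can use expressions, so the homeDirectory can be rewritten with the user id appended. A sketch of the two pieces (untested here; paths and the umask value are assumptions):

```
# /etc/nslcd.conf -- rewrite every homeDirectory to /home/netlink/<uid>
map passwd homeDirectory "/home/netlink/$uid"

# /etc/pam.d/common-session -- create the directory on first login
session required pam_mkhomedir.so skel=/etc/skel umask=0022
```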
This is downright thorny, but I'm slowly getting somewhere.
CS has made adjustments to the dev ldap setup so that we can get the info we need without excessive modification to our build (that is, we can make anonymous requests for info from the ldap server).
All the interesting stuff gets done after installing some packages. It looks like there's a meta-package called ldap-auth-config that should install all the necessary stuff - primarily the pam stack.
Once installed, the configuration is done in /etc, mostly under /etc/pam.d/.
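On a Debian/Ubuntu box, the install side of the above boils down to something like this (a sketch, not the exact commands we ran; the nslcd alternative is the one discussed earlier):

```shell
# Pull in the LDAP auth stack via the meta-package mentioned above
apt-get install ldap-auth-config

# ...or the nslcd flavour instead (note the trailing d on the libs):
# apt-get install libnss-ldapd libpam-ldapd nslcd

# The files that matter afterwards:
ls /etc/nsswitch.conf /etc/pam.d/
```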
I'm making headway, but very slowly - documentation is non-existent.
This blog is the location for all work involving software and hardware maintenance, updates, installs, etc., both routine and urgent, in the server room, the labs and the R&D rooms.