My machine has been freezing quite regularly when Java goes nuts and eats all the CPU and memory, so I finally decided to do a dist-upgrade to see if that would fix it. Everything went smoothly, but it didn't actually solve the problem. In the end, I upgraded my Tomcat from 6.0.20.0 to 6.0.26.0, and that seems to have solved it. Must have been some kind of memory leak in Tomcat. But I've also moved the projects I'm not actively working on out of the webapps folder, to reduce the load.
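For the record, a dist-upgrade on Ubuntu is just the usual pair of commands from a terminal:

# standard Ubuntu upgrade sequence
sudo apt-get update
sudo apt-get dist-upgrade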
Following a procedure I'd already tested on my home computer, I upgraded VirtualBox like this:
- Checked that the existing version and the target version were both available in the Synaptic Package Manager.
- Shut down all VMs, took snapshots, and backed them up. This includes backing up the contents of ~/.VirtualBox, which is on the root drive (while the VMs are on the second drive); there's a sketch of this step after the list.
- Using Synaptic, removed the old version, and then installed the new version.
- Started VBox and tested the VMs.
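For what it's worth, the backup step can be sketched from the terminal like this (the destination directory on the second drive is hypothetical):

# copy the VirtualBox config folder from the root drive; /media/data/backups is a placeholder
cp -a ~/.VirtualBox /media/data/backups/VirtualBox-config-$(date +%Y%m%d)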
I'm posting my experiences with this device, which I've recently bought for home, because at some stage we may consider one of the Synology devices as the successor to Rutabaga. Also the blog now substitutes almost entirely for my memory, so if I don't post this stuff here, I'll never remember what I did.
Initial setup
- Undo the thumb screws at the back; the rear panel hinges down, and the cover comes off.
- Extract one of the drive trays.
- Put in your drive (mine is a 1.5 TB Seagate -- I only have one at the moment, but there are trays for four).
- Screw in four screws to secure it to the tray.
- Slide the tray back in.
- Screw in two screws to secure the tray.
- Put the cover back on, and screw the back up, including the cable clamp which comes as a separate piece in the box.
- Connect ethernet and power (the power brick is HUGE, but no matter), then start the device up.
- Run the install.sh to install the software on your PC.
- Start the software.
- It finds the DiskStation on the network, and you go through a basic setup which amounts to choosing DHCP or not, and giving it an admin password.
- You install the firmware on the DiskStation from the CD (it's a "patch file" with a .pat extension).
- The DiskStation restarts itself.
- Then you just go to its web interface (on port 5000, although it redirects you there from 80).
- I set up my router so that it always gives the Synology the same IP address, so it's easy to find.
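If your router happens to run dnsmasq (mine may not; this is just an illustration of the idea), the reservation is a single line keyed to the DiskStation's MAC address:

# dnsmasq.conf: always hand this MAC the same address (MAC and IP are placeholders)
dhcp-host=00:11:32:xx:xx:xx,192.168.xxx.xxx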
Next steps
My drive was uninitialized, so the first thing to do was to create a volume. In this case, because I have only one drive, the only kind of volume I could create was a "Basic volume"; if you have more drives, you can create JBOD or RAID volumes of many kinds. My plan is to keep all my drives as separate volumes, and have them backing up to each other, so that I have lots of copies of data on standalone drives that I can pull out of the DS and mount elsewhere if necessary.
It takes a long time to create a volume; it formats the drive (ext3) completely, rather than using a "quick format", and presumably checks every sector. It then mounts the volume as "volume1".
While that was going on, I turned on SSH and logged in from the terminal. Only the admin user can log in over SSH; my guess is that "users" (see below) are not real users on the underlying system. I was able to determine that it's running BusyBox, and that the Ash shell you get is pretty limited in what it can do.
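If you want to check this yourself, the session looks something like the following (the IP is a placeholder):

ssh admin@192.168.xxx.xxx
# then, on the DiskStation: busybox with no arguments prints its version banner
busybox | head -n 3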
Updating the firmware
A new version of the firmware had been released since the CD I got with the device, so I downloaded it (a .pat file), and installed it through the web interface. Worked a treat. I got a couple of new features with it.
Creating users and shares
- In the web interface, you can create users in a straightforward manner. I created two.
- There's a guest user, but it's off by default, and I left it off.
- I then created some shares. These are just directories on your volume, to which you assign users permissions. It's completely straightforward.
- For each share, you get to set privileges, and also set NFS privileges separately. I turned on NFS for all my shares.
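With NFS turned on, you can confirm what the DiskStation is exporting from any Linux machine using showmount (part of the nfs-common package; the IP is a placeholder):

# list the NFS exports the DiskStation is offering
showmount -e 192.168.xxx.xxx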
There's a web-based interface for file management called FileStation, which is handy if you need to upload a couple of files quickly, or look through the directories, but mainly I'll be mounting drives and using robocopy and rsync to back things up.
Mounting drives on Linux
- Had to install the NFS client first (sudo apt-get install portmap nfs-common).
- Then I created directories in my home folder, one for each share on the Synology.
- I mounted the directories like this (using the IP address of the Synology):
sudo mount 192.168.xxx.xxx:/volume1/mholmes /home/mholmes/DiskStation/mholmes
- This mounted instantly -- so quickly I thought it had failed, but I copied a file to it, then looked in the FileStation interface and the file was there.
- Eventually I'll set up these mounts in fstab (see the sketch after this list).
- To get some of my key data backed up as quickly as possible, I just copied and pasted hundreds of GB of stuff from my local machine to the DiskStation, using Nautilus. The copy was flawless, and faster than anything I've seen before on my home network.
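The fstab entries I have in mind would look something like this (a sketch only; the mount options are conservative NFS defaults I haven't yet tested against the DiskStation):

# /etc/fstab -- one line per Synology share
192.168.xxx.xxx:/volume1/mholmes  /home/mholmes/DiskStation/mholmes  nfs  rw,hard,intr  0  0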
Mounting drives on Windows
- On Windows, you just go to \\DiskStation\volume1\whatever in Windows Explorer, and it prompts you for a login. It needed DiskStation\[username] rather than just [username].
- I mounted the network drive, and set it to reconnect at logon.
- I ran a huge Robocopy operation, and saw it go much faster than the same operation had ever gone with the TimeCapsule that died last week. In fact it seems to be backing up over the network just as fast as it backs up to its own internal D drive.
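The Robocopy run was along these lines (a sketch; the source, destination, and log paths are placeholders for my actual setup):

REM mirror the tree, including empty subdirectories, with short retry delays
robocopy D:\data Z:\data /E /R:1 /W:1 /LOG:C:\robocopy.log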
Lots more stuff to learn -- I have to set up the scheduled shutdown and startup (a wonderful feature), figure out which of the built-in apps I might want to use, and figure out how I can make it back up between its own drives (for which I need another drive, of course).
We have long struggled with the issue of how to provide easy access to data on the TAPoR servers for students and researchers working on projects. The ideal, especially for XML markup projects, is to have a TAPoR share mounted as a network drive; this enables oXygen to provide an easy project editing interface through "Link to external folders", and files can be opened and saved transparently on the server in this way. On Linux this is (naturally) easy. This is what I have learned from trying to do this for CC on her Mac and Windows laptops over the last two days:
- In our labs, we use taporshare.tapor.uvic.ca, which provides an SMB connection. However, this is problematic for laptops and remote locations, because SMB is insecure, and it's also blocked by default on the UVicDefault wireless network.
- Macs do not support mounting SFTP connections in the Finder out of the box.
- oXygen promises to support opening and saving of files over SFTP, but it seems to work only for one file at a time, which makes validation difficult, and it's hard to set it up so you can browse the folder tree on the server. After a restart, oXygen also seemed to lose the logon credentials and be unable to ask for them again, causing "Auth fail" errors. Hopeless.
- MacFuse and MacFusion (search the web) worked well for me on Snow Leopard from home, connecting to Lettuce, but were hopeless on Leopard; we had endless authentication issues, and were unable to save files to the server.
- On Windows, you can use Dokan SSHFS (search the web) to mount a share (e.g. Lettuce, or nfs.tapor.uvic.ca) over SSH. This works OK, except that the GID setting is not respected when creating new files; they end up as user:user instead of user:project. Still, it's better than nothing.
- On the Mac, the only solution we could find was to use Transmit, which costs money, but which will allow you to open remote files over an SFTP connection. The only issue here was the problem with validation; the relative link to the schema did not work, of course, because the file was opened in oXygen from a temp folder. We were able to work around it by placing copies of schemas in the temp folder, in the right relationship (fortuitously) to the place where Transmit stashes its temporary files when you open a file from the server.
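For reference, this is why Linux is easy: sshfs does the whole job in one command (a sketch; the hostname, remote path, and mount point are assumptions):

# mount a remote folder over SSH; unmount later with: fusermount -u ~/tapor
sshfs username@lettuce.tapor.uvic.ca:/path/to/project ~/tapor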
All in all, very unsatisfactory. This precludes people working remotely on our project data unless they're relatively sophisticated and can use something like subversion, can be relied upon to use a client such as WinSCP or Cyberduck correctly, or are using Linux.
There are a couple of CANJAS lectures scheduled while I'm gone, and Dr. Iles will be handling the hardware. He asked me to write up a brief outline of what to do:
Placement.
In the boardroom, the optimal location is at the window end of the room, since the curtains are not sufficient to stop glare on the screen if it faces the window.
I generally need to get into the room a bit ahead of schedule to move the table and chairs around to fit the VC equipment into the space.
Network.
The boardroom is already sorted out, network-wise, and should just work. That said, the network port is at the opposite end of the room from the best location for the VC equipment. There is a long network cable for this (the end of which is sticking out of a blue-grommeted hole in the bottom left side of the cart). I run it under the tables to keep it out of the way.
*** Networking note: If another node is dialing in (a cascade call from UofA or similar), you'll need to know your IP address beforehand and give it to the dialer. If you dial out (make your own call to another node), your IP doesn't matter.
Audio.
In the boardroom, you should be able to get away with a single microphone, but if you have a full house you might want to use two. I run the cable (sticking out of the same hole as the network cable) down the middle of the table (on the floor, running it up to the mic's location). Remember that the mic displays a green light when broadcasting and a red one when turned off. They can be manually turned on and off with the switch on each mic, or through the remote mute function.
*** Audio note: the mic cables plug in to the mic only one way. It's a bit like an S-Video cable in that it requires a delicate touch to attach. Take a close look at the cable end and the mic before trying to connect them.
Video.
The VC equipment is plugged into the S-Video input on the TV (which displays as "AV1" on the TV screen). Use the "Source" button on the TV remote to select the input.
The bigger VC box (on the right) has a power (toggle) switch that should already be on. When you plug in the cart and turn the power on with the VC remote (the silver one), the camera should turn around a bit and settle in a forward position (away from the TV). If the TV is on the right input, you should see a view of the room on the TV screen.
If the blue Sony IPELA screen is displaying on the TV, you'll notice in the lower middle section of the screen some info about the network connection. It should display an IP address here; if it doesn't, you don't have a connection. Fix this first!
To dial in to a node, use the IPELA remote and choose "Phone Book" in the top right of the main screen. When you click it, you get a new screen with about six big, cryptically titled icons (Sony, Clare@UofA etc.). Look at the bottom left of the screen and you'll see the IP address that will get dialed if you choose one of the icons. (*** Remember: if UofA is going to do a cascade call (dialing you), you need to know your IP address and make sure the outside caller knows what it is.)
If you need a separate monitor for a PowerPoint presentation, you need to make sure that it's working ahead of time. I normally dial up UofA and ask to do a test to make sure everything is functional.
To connect things, put the small LCD monitor where you want it and connect the VGA extension cable to the one already on the monitor. The extension cable is already attached to the SONY box; you'll find the cable inside the cart.
My Windows 7 RC will start timing out next week, so I'm building a new VM for Win7/64 to do testing and Qt builds on. The installer is quicker and easier than any Windows installer from the past.
Took Greg's new build of XEP and integrated it into the ACH project, as ported to the new Cocoon/eXist build by MJ. It works as expected. This is what I had to do:
- Take the XEP configuration data, which includes the docs, examples, fonts, hyphenation data, images, lib and, crucially, the xep.xml file which configures usage of fonts etc., and put that in the [cocoon]/resources directory. Much of this (docs, examples, etc.) could be discarded, but it's not a bad thing to have the docs and examples available for anyone who needs to maintain the project in the future.
- Put the three XEP jars that form the Cocoon connector into [cocoon]/WEB-INF/lib (xep.jar, XEPSerializer.jar, and xt.jar).
- Added the serializer data to the sitemap:
<map:serializer mime-type="application/pdf" name="fo2pdf"
                src="com.renderx.xepx.cocoon.Serializer">
  <parameter name="com.renderx.xep.CONFIG" type="string"
             value="/home/mholmes/apps/apache-tomcat-6.0.20/webapps/ach/resources/xep/xep.xml"/>
</map:serializer>
<map:serializer mime-type="application/postscript" name="fo2ps"
                src="com.renderx.xepx.cocoon.Serializer">
  <parameter name="com.renderx.xep.CONFIG" type="string"
             value="/home/mholmes/apps/apache-tomcat-6.0.20/webapps/ach/resources/xep/xep.xml"/>
</map:serializer>
This obviously uses full paths on the local drive, which would need to be updated when moving the project; I'm now going to see if I can figure out how to make these paths relative, using Cocoon protocols.
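For context, a serializer declared this way gets used from a pipeline match along these lines (a sketch; the match pattern and stylesheet name are hypothetical, not our actual sitemap):

<!-- hypothetical pipeline match: generate XML, transform to XSL-FO, serialize with XEP -->
<map:match pattern="*.pdf">
  <map:generate src="xml/{1}.xml"/>
  <map:transform src="xsl/make-fo.xsl"/>
  <map:serialize type="fo2pdf"/>
</map:match>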
Update: no way to avoid literal paths
It seems that there is no way to get around the requirement for literal paths in the sitemap. In view of that, I've relocated the xep folder so that it's a child of /site/. This reduces the changes we need to make to the root Cocoon to a minimum; now all that needs to happen is that the jars get dropped into the WEB-INF/lib directory. That's perhaps the best we can do for the moment. Eventually we hope to move to FOP, which is intelligent enough to read its configuration from a relative path.