RE provided new space, doubling the available drive space; I followed my own instructions here to extend the partition. No problems at all.
Category: "Activity log"
Today I blew up a couple of the apps and had to restart them, by doing this the wrong way. When you have a new XAR to deploy:
- Use Chrome/Chromium, not Firefox.
- Connect over the internal URL on :8080.
- Upload the new package.
- If it goes wrong and you see an error message, the chances are the db is now set to read-only.
- If that happens, try shutting down the db from the web interface. If that works, restart it from /etc/init.d/jetty. If it fails, you may need to kill all the relevant processes on Peach before restarting.
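The recovery steps above might look like this at the shell. The init-script path comes from these notes; the process pattern `org.exist` is a guess and should be verified on Peach before killing anything:

```shell
# Recovery sketch for the read-only-db case. JETTY_INIT is from the notes;
# PATTERN is an assumption about what eXist's Java process looks like.
JETTY_INIT=/etc/init.d/jetty
PATTERN='org.exist'

# If shutting the db down from the web interface worked, a plain restart
# should be enough:
#   sudo "$JETTY_INIT" restart

# If it failed, list the stray Java processes first, then kill them and
# start fresh:
pgrep -af "$PATTERN" || echo "no matching processes found"
#   sudo pkill -f "$PATTERN"
#   sudo "$JETTY_INIT" start
```

The `sudo` lines are left commented so the sketch can be read (and dry-run) safely; uncomment them once the pattern is confirmed to match only the intended processes.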
With these big XARs, we may need to consider testing an alternative process where we uninstall the old XAR and then put the new one in the autodeploy folder before restarting eXist.
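As a sketch of that alternative route, run here against a scratch directory: the install path and XAR name are placeholders, not the real ones.

```shell
# Dry-run of the uninstall-then-autodeploy route. EXIST_HOME and
# new-package.xar are stand-ins; substitute the real install dir and XAR.
EXIST_HOME=$(mktemp -d)                 # stand-in for the real install dir
mkdir -p "$EXIST_HOME/autodeploy"
touch new-package.xar                   # stand-in for the real package

# 1. Uninstall the old package from the dashboard first.
# 2. Stage the new package where eXist scans on startup:
cp new-package.xar "$EXIST_HOME/autodeploy/"
# 3. Restart eXist (e.g. sudo /etc/init.d/jetty restart) so it
#    autodeploys the package as it comes up.
ls "$EXIST_HOME/autodeploy"
```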
The eXist team tagged 3.1.1, so I've rebuilt our template from that tag, tested it, and pushed it to the existDeployer folder on home1t.
Our MySQL server was down ("too many connections"); I spent some time reporting the outage, investigating, and fielding queries from users. In the end a restart fixed it.
Tested a build of the dev branch with my script and deployment stuff locally; all good, and the bug with the java client is fixed.
Figuring out how best to configure a Jetty/eXist instance to run happily alongside others, on a test domain, and how to test that setup, has been a relatively long process. This is what I've done:
- Install Apache locally from the repos.
- Install mod_jk from the repos.
- Turn on SSL (sudo a2enmod ssl) and set up a self-signed cert (lots of docs on this available).
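For reference, one way to generate such a self-signed pair with openssl (written to a scratch directory here; on the machine itself the files would go wherever the vhost config points, e.g. /etc/ssl/certs/spud.crt and /etc/ssl/private/spud.key):

```shell
# Generate a self-signed cert/key pair for the test host "spud".
# The subject and lifetime are illustrative; adjust as needed.
DIR=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj '/CN=spud' \
    -keyout "$DIR/spud.key" -out "$DIR/spud.crt"

# Sanity-check the result:
openssl x509 -in "$DIR/spud.crt" -noout -subject
```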
- Set up test domains in the local hosts file:
127.0.0.1 localhost
127.0.1.1 spud
127.0.2.1 test-internal.hcmc.uvic.ca
127.0.3.1 moeml-internal.hcmc.uvic.ca
- Set up virtual domains in Apache -- example sites-enabled/test.conf:
<VirtualHost 127.0.2.1:80>
    ServerAdmin webmaster@localhost
    ServerName test-internal.hcmc.uvic.ca
    ServerAlias test
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPreserveHost on
    ProxyPass / http://test-internal.hcmc.uvic.ca:8080/ nocanon
    ProxyPassReverse / http://test-internal.hcmc.uvic.ca:8080
    AllowEncodedSlashes NoDecode
</VirtualHost>

<VirtualHost 127.0.2.1:443>
    ServerAdmin webmaster@localhost
    ServerName test-internal.hcmc.uvic.ca
    ServerAlias test
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/spud.crt
    SSLCertificateKeyFile /etc/ssl/private/spud.key
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPreserveHost on
    ProxyPass / http://test-internal.hcmc.uvic.ca:8080/ nocanon
    ProxyPassReverse / http://test-internal.hcmc.uvic.ca:8080
    AllowEncodedSlashes NoDecode
</VirtualHost>
- In these four files in the Jetty instance:
tools/jetty/etc/jetty-http.xml
tools/jetty/etc/jetty-ssl.xml
tools/jetty/etc/standalone-jetty-http.xml
tools/jetty/etc/standalone-jetty-ssl.xml
change the host setting to <Set name="host">test-internal.hcmc.uvic.ca</Set>. (I think only the first two matter for our purposes, but it does no harm to change the others.)
- Start the Jetty instance and restart Apache, then access the Jetty app on test-internal.hcmc.uvic.ca.
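For context, the edited host setting sits inside the connector definition in jetty-http.xml. A heavily abridged sketch, from memory, so treat it as illustrative only (the real file carries more arguments and properties):

```xml
<!-- Illustrative fragment only: pinning the host means this instance
     binds to its own test domain rather than to all interfaces. -->
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.ServerConnector">
      <Arg name="server"><Ref refid="Server"/></Arg>
      <Set name="host">test-internal.hcmc.uvic.ca</Set>
      <Set name="port">8080</Set>
    </New>
  </Arg>
</Call>
```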
I still have to test this with a second Jetty running side-by-side on a different domain; I'll do that tomorrow.
JA called to report that the config on the CCAP db had been hosed by the visit of a robot to a specific URL, which triggered something completely unexpected and unwanted. He was able to recover it, and to add protection against a similar event both on CCAP and on LOI. The AtoM documentation site is apparently not working properly, so we can't find anything there about this particular "feature".
The attempt to build a Jenkins server using a pure shell-script approach, which worked for Ubuntu 14.04, is now problematic for 16.04 and in particular for current versions of Jenkins, so I'm taking a different tack and trying to create a Docker image. Early steps are going well; it remains to be seen if I can get the whole thing to work, but in the meantime the learning is generally useful.
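The early steps might look something like this Dockerfile sketch; the extra packages here are placeholders for whatever the build actually turns out to need, not a record of what was done:

```dockerfile
# Sketch only: a starting point under exploration, not the finished image.
FROM jenkins/jenkins:lts

# Layer on extra build tooling as root, then drop back to the jenkins user.
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends ant \
 && rm -rf /var/lib/apt/lists/*
USER jenkins
```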
As we move towards deploying standalone eXist/Jetty applications for our projects, we're figuring out how best to configure them. One issue is that we're probably going to want to point the subdomain (graves.uvic.ca or whatever) at the /apps/graves/ subfolder, but we're still going to need access to some of the default eXist applications such as eXide and the dashboard. This can be accomplished by adding the following line to eXist's controller-config.xml:
<root pattern="/apps/graves/apps" path="xmldb:exist:///db/apps/"/>
Add it as the first entry, before the similar existing entries. The effect is to leave all the existing graves app functionality handled by apps/graves/controller.xql, but hand anything accessed under /apps/graves/apps to the appropriate app's controller. My testing with eXist 3 RC1 confirms that this works; it should mean that on going live, the dashboard (for instance) would be accessible at graves.uvic.ca/apps/dashboard (and we can access it over TLS for better security when logging in).
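To make the ordering concrete, here's a sketch of how the two entries would sit together; the second line stands in for the existing graves entry, whose exact form in our file may differ:

```xml
<!-- Order matters: the more specific /apps/graves/apps pattern must come
     first, or the general graves entry will swallow those requests. -->
<root pattern="/apps/graves/apps" path="xmldb:exist:///db/apps/"/>
<root pattern="/apps/graves" path="xmldb:exist:///db/apps/graves/"/>
```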
MC reported he'd had to restart tomcat-devel several times, so we started looking through logs. I think I've traced the current Tomcat issue to a weird combination of circumstances.
Way back when, the Map of Early Modern London project used to be a PHP app. It had URLs like this:
We turned it into an eXist app, with proper page URLS like this:
I wrote some cunning logic in the app to detect requests to the old URLs and redirect them to the new. However, the logic never fired in normal circumstances, because Apache on the front-end saw ".php" and passed the page off to the PHP interpreter.
To deal with this, I and someone from Systems cooked up an Apache rewrite rule in the virtual host, which would turn the old URL into the new. However, it's not very sophisticated; it treats search parameters very crudely, so it turns this:
which throws up errors when eXist sees it, naturally. Now, it turns out that this particular URL, presumably along with some similar ones, was linked from the old site way back when; those links survive in places such as archive.org, and some bot somewhere has been hitting them and generating a lot of errors.
I think at this point it might make sense to revisit that old rewrite rule and make it much simpler, so that it converts anything which contains the string:
and doesn't worry about trying to parse it at all. Wrote to MC to see if he thinks that makes sense.
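A minimal version of that simpler rule might look like this; OLDPAGE.php is a placeholder for the real string (left out above), and redirecting to the site root is my assumption about the sensible target:

```apache
# Sketch: anything whose path mentions the old script gets bounced to the
# site root, with the query string discarded (QSD) rather than parsed.
RewriteEngine on
RewriteCond %{REQUEST_URI} OLDPAGE\.php
RewriteRule .* / [R=301,L,QSD]
```

The QSD flag (Apache 2.4+) drops the old query string entirely, which is the point: no attempt to translate the old parameters at all.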