The issue of linking inside and outside the site is something that really needs to be dealt with. Right now, there's a proliferation of different types of link, specified by the `@type` attribute on the `<ref>` element; this is the range of instances currently in the XML files (some obviously typos, and some errors):

`internal bibl detail external 'detail pers name CORN1 library special topics detal SIRT1 URI`
I propose the following:

- The `@type` attribute should just be dropped, and deleted from existing `<ref>` elements.
- There should be two types of reference, distinguished by what their `@target` attribute looks like: `<ref target="mol:ADAM4">` for internal references, and `<ref target="http://some.url/">` for external ones.
- When the output (XSLT) system sees a `@target` starting with `mol:`, it knows it has an internal reference. In that case, it constructs a simple link consisting of the relevant `@xml:id` (which is `substring-after(@target, 'mol:')`) followed by `.htm`. (See the XSLT sketch after this list.)
- The `controller.xql` and `index.xql` files are able to figure out where the relevant `@xml:id` is, and process it accordingly. So, for instance, if a TEI file in `/db/data/` has that `@xml:id` on its root element, it is processed as a whole document; if a person element in `PERS1.xml` has that `@xml:id`, it is processed as a person record. (See the XQuery sketch after this list.)
- When the output (XSLT) system sees a `@target` which does not start with `mol:`, it assumes an external link, and creates an anchor accordingly.
- In the case of name tags, which also give rise to links to people, I have already converted all `@key="ADAM4"` instances to `@ref="mol:ADAM4"`, and links are being created accordingly.
This means that all pages are being served out of the root of the site, and as long as all `@xml:id`s are unique, everything is accessible in that way. There may still be a couple of details to work out relating to `@type` on `<name>` (which is possibly worth keeping, although I don't think it's being used for anything useful at the moment), and the various uses of `<bibl>` (which I think in some cases ought to be `<ref>`s).
Waited for a document to arrive that needed a rapid response.
Wrote a simple table-sort library in JavaScript that does something similar to what the Adaptive DB table sort does, but is much slimmer and faster, because it simply decides whether a column is numeric or textual, and because (unlike the Adaptive DB tables) rows are sorted individually rather than in pairs. This should be a plug-and-play utility -- add the JS file to the header, add the class "sortable" to any table, and it does the rest. Right now it's working on the list of ids and the contributor list, and I'll probably use it on the "index of the site" page, which has lots of little subsets of items.
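The core of the pattern looks something like this. It's a sketch of the approach just described, not the library itself; everything beyond the "sortable" class (the structure, the numeric test, binding to header clicks) is my illustration:

```javascript
// Sketch of a plug-and-play sorter: include the file in the page header,
// add class="sortable" to a table, and clicking a header sorts that column.
document.addEventListener('DOMContentLoaded', function () {
  document.querySelectorAll('table.sortable').forEach(function (table) {
    table.querySelectorAll('th').forEach(function (th, col) {
      th.addEventListener('click', function () {
        var tbody = table.tBodies[0];
        var rows = Array.from(tbody.rows);
        // Decide once whether the whole column is numeric or textual.
        var numeric = rows.every(function (row) {
          var text = row.cells[col].textContent.trim();
          return text === '' || !isNaN(parseFloat(text));
        });
        // Sort rows individually, not in pairs.
        rows.sort(function (a, b) {
          var x = a.cells[col].textContent.trim();
          var y = b.cells[col].textContent.trim();
          return numeric
            ? (parseFloat(x) || 0) - (parseFloat(y) || 0)
            : x.localeCompare(y);
        });
        // Re-append in sorted order; appendChild moves existing nodes.
        rows.forEach(function (row) { tbody.appendChild(row); });
      });
    });
  });
});
```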
LSPW pointed out that headings in chapters which are nested in `<div>`s to the point where they become `<h3>` elements were not centred, whereas those which become `<h2>`s are centred by default. I added `text-align: center` to the `h3` ruleset in the stylesheet, but then we discovered that reference items have `<h3>` headings, and those should be left-aligned, so I added another ruleset to set them back to left-aligned.
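In stylesheet terms, the fix amounts to a pair of rulesets like these; the `.reference` selector is an assumed class name for the reference items, standing in for whatever hook the real stylesheet uses:

```css
/* Centre h3 headings, matching the default for h2. */
h3 {
  text-align: center;
}

/* ...except in reference items, whose headings stay left-aligned.
   The .reference class is an assumption. */
.reference h3 {
  text-align: left;
}
```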
Another pass through the detail and the summary.
To make sure Tomcat starts with UTF-8 instead of the default encoding, I put a script in `[tomcat]/bin` called `utf8startup.sh`:

```bash
#!/bin/bash
# Set the JVM's default encoding via the -Dfile.encoding system property,
# then start Tomcat as usual.
export JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"
./startup.sh
```

Running this instead of the standard `startup.sh` helps to avoid character encoding issues.
Data hassles (see my previous post) prevented me from leaving until I could get a good copy of all my data, and back it up successfully.
Today I hit a major problem with WebDAV: when I tried to get good working copies of my XML files out of the db by mounting it as a share through WebDAV, most of the XML documents came out with dozens of trailing #0 (NUL) characters. These are very hard to deal with: they make the file invalid, but you can't easily search-and-replace them. I had to unmount and remount several times, and copy-paste the files in small batches, to get a good copy of my data, and it took me ages to clean it up and confirm it was clean. I need to edit externally because of the XInclude issue mentioned in my last post, so I'm now going to abandon the data explorer approach, and work in a more conventional way:
- All data in SVN (not set up yet).
- Checkout / edit / commit.
- Upload to db when ready.
I'll make sure the SVN permissions are set so that the site data can't be changed by the data editors.
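The day-to-day cycle would look roughly like this; the repository URL is a placeholder (the repo isn't set up yet), and the upload step is left open since that mechanism is still to be decided:

```bash
# Rough sketch of the planned edit cycle.
svn checkout http://svn.example.org/project/data data   # placeholder URL
cd data
# ... edit the XML files locally, validating as usual ...
svn commit -m "describe the change"
# Then push the committed files up to the db when they're ready
# (upload mechanism still to be decided).
```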
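One footnote on the trailing #0 characters above: a byte-level filter is one way to strip them in bulk, rather than cleaning by hand; a note for next time rather than what I actually did today:

```bash
# Strip NUL (#0) bytes from a file pulled over WebDAV; NUL is never
# valid in XML, so deleting every occurrence is safe.
tr -d '\000' < broken.xml > clean.xml
```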