Did some research Monday and then met with Jennifer today.
Contribute appears to have no way of browsing the file-and-folder tree, so the only way to reach a file included by a PHP call is to enter the URL of the included file manually into the URL box in the Contribute editor.
If the file is a well-formed XHTML fragment rather than a full document (e.g. an unordered list), it will open for reading, though what appears may be either the raw XHTML code or Contribute's default parsing and rendering of the list. The file can also be edited, and it is always parsed for presentation in the editor (lists show up as lists, links as coloured, underlined text; neither shows up as XHTML code). Changes are saved OK.
Jennifer is OK with editing the section nav bar files in each section in this way if necessary. We discussed how to handle certain repeated elements of page content (e.g. the department listing), but decided to leave them as duplicate hard-coded instances for the time being.
We need to run customized XSLT transformations based on path (so that e.g. teiJournal/apa/doc.htm gives us a page rendered using the APA style). My intention was to pass the directory name into the XSLT transformation as a parameter, which I can do, but I can't then use that parameter to selectively import other XSLT files, because xsl:import elements must appear before any parameter declarations and their href attributes must be static.
The solution, therefore, is to have an apa/xhtml.xsl file, which then imports a range of files, including one which is the framework file (containing the root template match). This will (I hope) enable me to use the same basic page code, down to the body tag, from a single file in the xsl folder root, but then render the details using files in the apa folder. I'm working on that now.
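Something like this is what I have in mind for apa/xhtml.xsl. The file names other than apa/xhtml.xsl are placeholders, and the structure is a sketch rather than the finished thing:

<!-- apa/xhtml.xsl: entry point for APA-rendered pages -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <!-- shared page skeleton, down to the body tag, from the xsl folder root -->
  <xsl:import href="../framework.xsl"/>
  <!-- APA-specific detail templates; imported later, so they take
       precedence over anything in the framework file -->
  <xsl:import href="biblio.xsl"/>
</xsl:stylesheet>

Import precedence works in our favour here: among a stylesheet's imports, later ones have higher precedence, so the APA files can override framework templates where needed.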
Angela's enormous video capture/rendering project is taking her a long time, and Richard and I decided to see if we could speed up the process using Visual Hub and XGrid. We installed Visual Hub on Braeburn, then enabled XGrid and opened up the relevant ports on Cortland and Spartan. However, the process doesn't seem to work. Spartan and Braeburn were unable to see each other at all; Braeburn and Cortland were able to see each other, and Cortland could treat Braeburn as an XGrid controller, but actually running processes under Visual Hub didn't work; Cortland showed processes "Pending" but never ran any, and Braeburn seemed to do nothing. Worth trying, but in the end we gave up. I turned off the Visual Hub ports and the XGrid service on Cortland and Spartan.
JT sent over a PDF document containing a review of the ScanCan site, by an acquaintance who works in IT. Took a detailed look through the review, and responded directly to JT.
Dealt with a fairly complicated problem this morning, and it should be documented in detail because it's the sort of thing you can only figure out by trial and error. The problem is this:
We need to store user preferences in the eXist database, because they should be easily backed up, and they should be editable through an XQuery/XUpdate-based GUI (in the long run). By preferences here, I mean a range of different things, including user strings (labels and captions for the GUI), colours and fonts, and straightforward settings choices, such as the choice to use APA style. We've already figured out how to store CSS information in <xsl:attribute-set> nodes, then use XQuery to retrieve it formatted as a CSS file for the browser; that's a relatively simple issue, because the browser will always request the file directly, through a call to a URL which triggers a Cocoon pipeline. A more complex issue concerns strings for GUI captions etc. These are typically required DURING an XSLT transformation. An added wrinkle is that there are default values and possible user overrides, and the system needs to be able to deliver a set of values where the user overrides are chosen if they exist, but the defaults are returned if they're not.
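For the record, the CSS side works along these lines; the paths and the naming convention in this sketch are illustrative, not our actual files. Given stored sets like

<xsl:attribute-set name="css_h1">
  <xsl:attribute name="color">#036</xsl:attribute>
  <xsl:attribute name="font-family">Georgia, serif</xsl:attribute>
</xsl:attribute-set>

an XQuery can serialize them as CSS rules:

declare namespace xsl="http://www.w3.org/1999/XSL/Transform";
string-join(
  for $set in doc('/db/teiJournal/settings/default/css.xsl')//xsl:attribute-set
  let $selector := substring-after($set/@name, 'css_')
  return concat($selector, ' { ',
      string-join(
          for $att in $set/xsl:attribute
          return concat($att/@name, ': ', string($att)),
          '; '),
      ' }'),
  '&#10;')

That covers the simpler CSS case, where the browser just requests a URL; the GUI strings, needed during a transformation, are the harder part.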
This is the way I'm doing it:
First, I store the two sets of strings in two separate files in the database:
/db/teiJournal/settings/default/strings.xsl
/db/teiJournal/settings/user/strings.xsl
The format of these files is straight XSLT 2.0, and each file consists simply of a list of <xsl:variable> elements. Next, I create an XQuery file, getGuiStrings.xq, which can merge the two files to create one file, with user values where they exist, and default values where they don't. This is the meat of the file:
declare namespace xsl="http://www.w3.org/1999/XSL/Transform";
(: any URI will do for the function namespace :)
declare namespace f="http://www.example.com/functions";

declare function f:getStrings() as element()* {
    for $defV in doc('/db/teiJournal/settings/default/strings.xsl')//xsl:variable
    let $varName := $defV/@name,
        $userV := doc('/db/teiJournal/settings/user/strings.xsl')//xsl:variable[@name = $varName]
    return
        if ($userV) then
            $userV
        else
            $defV
};
(:
===================================================
DOCUMENT NODESET
---------------------------------------------------
:)
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
{f:getStrings()}
</xsl:stylesheet>
(:
======================== END ========================
:)
Then we need a sitemap pipeline which makes it possible to access the output from this XQuery through a URL:
<map:match pattern="xsl/db/guiStrings.xsl">
  <map:generate src="xq/getGuiStrings.xq" type="xquery"/>
  <map:serialize type="xml"/>
</map:match>
Finally, the actual base XSLT file on the filesystem, which is called when producing output, must be able to import that file. That took a little figuring out; it requires the use of the cocoon:/ protocol:
<xsl:import href="cocoon:/xsl/db/guiStrings.xsl"/>
Without the cocoon:/ protocol it won't work, because the XSLT engine resolves the href against the filesystem instead of invoking the Cocoon pipeline.
Got this working with a simple example using the plain text rendering I built yesterday.
Dr. T dropped by with a copy of volume 16, fresh from the printers. On first look, it seems a much better job than the last one -- the colours on the cover are good, and the book is the right size. The pages look as though they've been printed more consistently too; there's none of the misalignment between lines on the recto and verso that we saw in the last volume. A definite step forward.
I have a basic (or rather, better-than-basic) text rendering system in place now. The difficult questions are what types of information should be included or left out (for instance, links to external documents need to have their targets rendered, but links to internal components, such as the ids of biblio items, need not be included), and how to present the information so that it's passably complete and detailed without being in any particular style (because we're going to have only one generic output format for text).
The finished system covers all the tags we've used in the two articles so far, and does a reasonable job, IMHO, of the bibliography. Spacing is rendered quite well, as is punctuation; the routines for handling these will come in handy when I move on to the XHTML, which comes next.
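To illustrate the link-handling distinction, this is roughly the shape of it; the element and attribute names (TEI-style ref/@target) are assumptions for the sketch, not necessarily the exact tags in our articles:

<!-- External link: render the text, then its target in brackets -->
<xsl:template match="ref[starts-with(@target, 'http')]">
  <xsl:apply-templates/>
  <xsl:text> [</xsl:text>
  <xsl:value-of select="@target"/>
  <xsl:text>]</xsl:text>
</xsl:template>

<!-- Internal pointer (e.g. to a biblio item's id): render the text only -->
<xsl:template match="ref[starts-with(@target, '#')]">
  <xsl:apply-templates/>
</xsl:template>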
Tried setting up an ialltjournal folder in home1t on Lettuce, but found that I couldn't get the Cocoon URLs to work, so I think there's some config that needs to be done by Greg, or perhaps it's required that the folder be the home folder of an actual user. In any case, I can wait till next week when Greg's back before I set that up. In the meantime, I can work on XSLT etc. in the teiJournal folder of my own account.
Set up a basic sitemap and some XQuery to retrieve a document, and tested serializing as XML, which works fine. Then I started working on the text output, which is not crucial (it's only really intended for text analysis) but makes a good learning tool for looking at all the textual features we're encoding.
No big surprises, although both appendices seem to me to be representations of documents that might better be shown as screencaps or scans.
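For reference, the document-retrieval piece of that setup is minimal; something like this, where the pattern, file names, and document path are illustrative rather than our actual ones:

<map:match pattern="doc.xml">
  <map:generate src="xq/getDoc.xq" type="xquery"/>
  <map:serialize type="xml"/>
</map:match>

And getDoc.xq can be as simple as:

(: illustrative path; returns the stored article as-is :)
doc('/db/teiJournal/data/article.xml')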