Cleaned up and tweaked the final code, and changed the XML links on the site itself so that P5 is shown instead of P4. Had some back-and-forth with the TEI list about how to encode email addresses (there is no formal method in P5), and reported a bug in one of the TEI conversion stylesheets to James Cummings.
...spent helping Stew and Greg figure out problems with the London Maps project (main issue was a mismatched eXist client which couldn't talk to the db properly, so queries couldn't execute).
This is the current status of EMLS:
Before we can set up the eXist db for EMLS, we need the 1.1.x release of a build of Cocoon with eXist built as a block. Wolfgang has promised that it will be released any minute now.
Meanwhile, the P4 format of the existing EMLS files has become obsolete; they should really be converted to P5 before we go forward with a site. I've been working on XSLT to convert the ScanCan code to P5 this week, and that is now working; I'm going to move next to the ACH abstracts, elaborating the XSLT as I go, and finally I'll deploy the same code against EMLS so that we can at least start the project with good P5 XML. That should be done (I hope) around the same time that the new eXist version is running on Lettuce, at which time we'll be able to start work properly.
If you'd rather leave the code as P4 and work with that, then let me know and I'll start writing stylesheets for the Web interface. I don't really mind either way. I think P5 has more long-term stability, but it will take a little longer to do the conversion (an additional ten hours or so, conservatively).
To build the entire Web interface, I would estimate something like:
1. Setting up the DB: 4 hours.
2. Index/TOC page: 4 hours.
3. AJAX search page: 8 hours.
4. Article display (XQuery/XSLT/XHTML): 20 hours.
5. Site styling/CSS: 20+ hours (depends largely on what Ray wants -- apparently simple requests for particular display features can eat up days with CSS).
6. Peripheral pages (About, Contact, etc.): 5 hours (assuming content is readily available).
7. Debugging, cross-browser testing, fixing and tweaking: 10 hours (very roughly).
8. PDF output (if required): 20 hours.
That's a total of just over 90 hours, and likely closer to 100 given the open-ended CSS and debugging items.
I was initially trying to create an xmlns attribute on the root element. This turns out to be the wrong approach. I found confirmation of this on the mulberrytech xslt list and xslt FAQ:
"What you can't do is to use xsl:attribute name="xmlns:xyz". Namespaces are not attributes."
http://www.stylusstudio.com/xsllist/200501/post30340.html
"xmlns is NOT an attribute"
http://www.stylusstudio.com/xsllist/200408/post50470.html
"namespace attributes show up on the namespace axis, not on the attribute axis."
http://www.dpawson.co.uk/xsl/sect2/N5536.html#d6682e2140
Jeni Tennison suggests that
[quote]
To create an html element with no prefix and in the XHTML namespace, you need to declare the XHTML namespace as the default namespace within your stylesheet:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns="http://www.w3.org/1999/xhtml">
[/quote]
http://www.dpawson.co.uk/xsl/sect2/N5536.html#d6682e1669
Following this approach, I added the TEI namespace to the root of the stylesheet. Initially, I figured this wasn't working (looking at the document in Firefox, as it was rendered by Cocoon), and I hacked around for ages trying to figure out why there was no xmlns attribute; eventually I realized that you have to view the source of the page to see it, because in its default rendering of an XML document with no style, Firefox suppresses the xmlns attribute!
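For future reference, here's a minimal sketch of the working pattern (the template content is just a placeholder): declare the TEI namespace as the default namespace on the stylesheet root, and every literal result element is created in that namespace, so the serializer writes the xmlns declaration on the output root for you.
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: because the TEI namespace is the default namespace of the
     stylesheet, the literal result element TEI below is created in that
     namespace, and the serializer emits xmlns="http://www.tei-c.org/ns/1.0"
     on it. No xsl:attribute is involved (or possible). -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.tei-c.org/ns/1.0">

  <xsl:template match="/">
    <TEI>
      <xsl:apply-templates/>
    </TEI>
  </xsl:template>

</xsl:stylesheet>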
I'm done for today. There are errors in the text "Sonnet de Courval"; I had some trouble figuring out what was wrong with the title coding.
See you next week
France
...spent helping Stew figure out some more of Mike's XQuery code.
Worked all day to write XSLT to convert the P4 in this project to P5. There are two purposes to this:
- Provide P5 source XML on the site, for future-proofing.
- Create P5 articles we can use when working on a future publishing project.
Began by looking at the sample fragments on the TEI Wiki; their approach is to string together lots of tiny stylesheets which each deal in detail with one item or element. That's too complicated for our situation, where our P4 is quite simple.
Built my own stylesheet from scratch, and finally got all articles validating against a P5 schema created from ROMA. In the process I found and fixed some inconsistencies in the original article markup (mainly ambiguous date formats).
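The core of the conversion boils down to something like the following (a simplified sketch, not the full stylesheet; the real one also handles attribute renaming, date normalization and so on):
<?xml version="1.0" encoding="UTF-8"?>
<!-- Simplified P4-to-P5 sketch: rename TEI.2 to TEI and move every
     element into the P5 namespace. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- The P4 root element becomes TEI in the P5 namespace. -->
  <xsl:template match="TEI.2">
    <xsl:element name="TEI" namespace="http://www.tei-c.org/ns/1.0">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- Every other element keeps its name but moves into the P5 namespace. -->
  <xsl:template match="*">
    <xsl:element name="{local-name()}" namespace="http://www.tei-c.org/ns/1.0">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- Attributes are copied through unchanged; text nodes are handled
       by the built-in templates. -->
  <xsl:template match="@*">
    <xsl:copy/>
  </xsl:template>

</xsl:stylesheet>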
Then I added a pipeline to the sitemap to generate the P5 output for any article. All works OK, except that the final stage in the pipeline, which should add an xmlns attribute to the root element, just doesn't work. I've read around all the relevant lists and sites, followed every suggestion I can find, and tried both XSLT 2.0 and 1.0, but nothing seems to work; there's just no way to add that xmlns attribute to the root of the document. I'll have to post a couple of questions on the gmane XSLT list and on the TEI list and see if anyone can come up with an answer. But this is quite a minor problem.
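The pipeline itself is nothing exotic -- a match along these lines (the pattern and paths here are hypothetical, not our actual sitemap):
<!-- Hypothetical sitemap fragment (inside map:pipeline): serve any
     article converted to P5 on the fly. -->
<map:match pattern="p5/*.xml">
  <map:generate src="data/articles/{1}.xml"/>
  <map:transform src="xsl/p4toP5.xsl"/>
  <map:serialize type="xml"/>
</map:match>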
Next steps: add links to the site to view the P5, and test conversion of the teiCorpus document for a full volume to P5.
Took what I'd learned in previous posts and applied it to other instances of the id() function call on FrancoToile: specifically the bookmarks, search-in-page and transcripts files. Martin helped, particularly with the two-level call needed in the search-in-page instance.
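Roughly, the pattern looks like this (document paths, ids and the @target attribute are invented for illustration, not the actual FrancoToile code):
xquery version "1.0";
(: Sketch only: id() takes a node from the target document as a second
   argument, so the lookup runs against that file rather than the
   context node. :)
let $bookmarks   := doc('/db/francotoile/bookmarks.xml')
let $transcripts := doc('/db/francotoile/transcripts.xml')
(: One-level call: fetch an element in bookmarks.xml by its xml:id. :)
let $bm := id('bm_042', $bookmarks)
(: Two-level call (the search-in-page case): use an id recorded on the
   first element to look up the corresponding element in the second file. :)
return id($bm/@target, $transcripts)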
With Greg, also investigated differences in eXist configuration files between the lansdowne and FrancoToile instances - can't find any that explain the different behaviour, so now the mystery is how the lansdowne video site is working.
Have tested fairly thoroughly now against the lansdowne site and appear to be getting identical behaviour.
Still haven't investigated whether any of this will help with the problem encountered on the Map Of London site.
Next will be adding the advanced search GUI for filtering videos, including grabbing the legal values for each control and writing the XQuery based on the state of those GUI controls.
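The rough shape of that query, as I see it now (parameter names and the video markup are invented; this is a sketch of the approach, not working code):
xquery version "1.0";
(: Sketch: each GUI control maps to a request parameter; an empty value
   means "don't filter on this field". distinct-values() over the same
   elements would supply the legal values for each dropdown. :)
declare namespace request = "http://exist-db.org/xquery/request";

let $speaker := request:get-parameter('speaker', '')
let $region  := request:get-parameter('region', '')
for $video in collection('/db/francotoile/videos')//video
where ($speaker = '' or $video/speaker = $speaker)
  and ($region = '' or $video/region = $region)
return $video/title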
History dept sent me some images to include in the top-left corner of their site on various pages.
Confirmed with Clifton the details of the procedure for dealing with the eye-candy images for the site. All images had to be sized and placed on a coloured bg to blend with the rest of the banner. The code in a page for images that belong to the collection of rotating images is handled one way; the code in pages for images which are limited to only one person's page is handled differently. The JS file has to be modified when images are added to the pool being rotated.
Eventually each faculty member will have their own fixed image. Until then, Clifton has copied the banner_images folder from the root into the faculty directory so that faculty without a fixed image can use the rotating images. The JS which does that assumes a certain path to the images, which in the case of the faculty pages is wrong. When all the faculty members have their own images, we can delete the banner_images folder from the faculty folder.
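The relevant bit of the rotation script amounts to something like this (a sketch with invented names, not the actual js):
// Sketch only: pick a random banner image from the pool. The path is
// the part that breaks on the faculty pages, so it is the one thing
// that has to be adjusted there (or the banner_images folder copied).
var bannerPath = 'banner_images/';
var bannerPool = ['banner1.jpg', 'banner2.jpg', 'banner3.jpg'];

function showRandomBanner(imgId) {
  var pick = bannerPool[Math.floor(Math.random() * bannerPool.length)];
  document.getElementById(imgId).src = bannerPath + pick;
}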
Clifton processed the images from their raw data files and I uploaded them and made changes to code.
I look forward to getting all of the markup consistent.
As for the titles, it makes sense to me to have a header tag rather than a title tag for the texts from the Cabinet Satyrique.
The rest should probably have title tags, but I'll wait until the protocol is established to make any changes. These texts include: "Arrest des cornards"; "Fantastique repentir" (even though it appears in the 19th-c. Varietes historiques); also "Purgatoire"; "Reconfort des femmes"; "Response des servants"; "Sermon des cocus"; and Varin.
N.B.:
"Arrest contre les chastrez": I've given it a main and a subtitle.
The Allard text (Gazette francoise) is too sketchy to be included on the website until I get back to see the original.