With TEH, got a basic XSLT set up to convert the XHTML static output into something simpler for ePub. She's now working on that conversion, and meanwhile I'm adding templates that create all the additional files needed for ePub creation. I've also spun out the Ant tasks for this into a separate file, which will eventually handle all the ePub work.
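The simplification stylesheet could start as an identity transform that strips what ePub readers can't use. This is a hypothetical sketch, not the project's actual stylesheet; the class names in the last template are invented for illustration:

```xml
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xh="http://www.w3.org/1999/xhtml"
    exclude-result-prefixes="xh">

  <!-- Identity transform: copy everything through by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Drop things with no place in an ePub: scripts and event-handler attributes -->
  <xsl:template match="xh:script | @onclick | @onload"/>

  <!-- Drop site chrome wrappers (class values here are assumed, not MoEML's real ones) -->
  <xsl:template match="xh:div[@class = 'siteHeader' or @class = 'siteFooter']"/>
</xsl:stylesheet>
```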
TEH's experiments with ePub have shown that one source of validity errors is the vendor prefixes still scattered throughout the CSS, all of which are probably now completely unnecessary, so I've removed them all (with the exception of those in the OpenLayers CSS, which is better left alone IMHO). This should clean things up a bit and help us get decent ePubs. Checked with PS before removing them.
With TEH, worked on the ePub project. She's building a test file to see what will and won't work from our current files. I've added automated download/update of the epubcheck jar file to our static build process, and tested it. I can't yet see a way to pass a batch of files to epubcheck, so it may be the usual one-file-at-a-time process, but with luck it shouldn't be too onerous because (I think) Ant will allow us to avoid instantiating a new JVM every time.
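A minimal Ant sketch of both ideas, with assumed property names and paths (not our actual build file): `<get usetimestamp="true"/>` only re-downloads the jar when the remote copy is newer, and `<java fork="false"/>` runs the checker inside Ant's own JVM rather than spawning a new one per file. The main class name is the one EpubCheck has historically shipped; depending on how the jar is packaged, its bundled dependencies may also need to be on the classpath.

```xml
<target name="getEpubCheck">
  <!-- Download only when the remote jar is newer than the local copy -->
  <get src="${epubcheck.download.url}"
       dest="${lib.dir}/epubcheck.jar"
       usetimestamp="true"/>
</target>

<target name="validateEpub" depends="getEpubCheck">
  <!-- fork="false" avoids instantiating a fresh JVM for every file checked -->
  <java classname="com.adobe.epubcheck.tool.Checker"
        classpath="${lib.dir}/epubcheck.jar"
        fork="false"
        failonerror="true">
    <arg file="${epub.file}"/>
  </java>
</target>
```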
Now that Jenkins is able to run the MoEML build again, I've been working through the fallout from the new Schematron rules that aim to reduce the number of redundant hi elements. I've had to exclude the Stow documents from those rules, because there are too many existing instances awaiting attention from editors who will re-encode them anyway. Fixed a few dozen errors spread across the rest of the collection.
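For the record, a hypothetical sketch of the kind of rule involved (the actual MoEML rules are more elaborate): a hi pointing at the same rendition as its parent hi adds nothing and can be flagged.

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">
  <ns prefix="tei" uri="http://www.tei-c.org/ns/1.0"/>
  <pattern>
    <rule context="tei:hi[@rendition]">
      <!-- A nested hi repeating its parent's rendition is redundant -->
      <report test="@rendition = parent::tei:hi/@rendition">
        Redundant hi: same rendition as its parent hi.
      </report>
    </rule>
  </pattern>
</schema>
```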
Discussed some CSS issues with LS, who is making lots of progress on the mayoral pageant CSS; we really ought to get some of the common problems/issues documented in Praxis.
Also checked over CH's work putting xml:ids on the divs in 1633, which looks good. Now we have to do some bigger regularization steps, including adding line beginnings where we can, plus some other small cleanup.
TE's work on teiHeader has highlighted the impoverished content model of projectDesc, so she's submitted a feature request to enrich it. Both JT and I helped work on it.
Finished the first draft of this presentation. I would like to get the image upload feature working in BreezeMap before doing the presentation, but that's not essential.
Met with MDH today to discuss the plan for putting Stow back together. Most of the big concerns were about rendition elements and pointers. I've been able to implement the first part of the plan today: getting the renditions rationalized. It's a multi-step process:
- Create an XSLT based on each include (i.e. find all the documents with mol-include, resolve those includes, and then process each of the included documents). This reuses pre-existing code from the standalone process.
- Copy over the XSLT and its corresponding document to a temporary directory
- Use the pre-existing for-each target to apply the XSLT to the XML file (it takes longer than I'd like, but I don't think there's any way to make it go faster)
- Now process the files that have the includes and bring in the content that's meant to be included; since all of the rendition selectors have been processed, it's simple to get the style values from the rendition elements in the header.
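The last step above could be sketched like this in XSLT 2.0. This is an illustrative simplification, not the actual code: it assumes each @rendition value is a space-separated list of #-prefixed pointers, and it flattens them into an inline style attribute taken from the header's rendition elements.

```xml
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">

  <!-- Identity transform: copy everything through by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Replace each @rendition pointer with the style declared in the header -->
  <xsl:template match="@rendition">
    <xsl:variable name="ids"
        select="for $t in tokenize(., '\s+') return substring-after($t, '#')"/>
    <xsl:attribute name="style">
      <xsl:value-of select="//tei:rendition[@xml:id = $ids]" separator="; "/>
    </xsl:attribute>
  </xsl:template>
</xsl:stylesheet>
```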
I'm currently doing this after the original build, but I don't know if that's the right time to do it; the problem is that it breaks if you try to do it in the quick and subset builds (since there are then no documents left with includes to resolve), which isn't great if you're trying to rebuild the reconstituted Stow in a smaller build process. I'll have to investigate that.
The now-fragmented Stow document needs to be reconstituted in XML form, even though we don't want or need a complete version in HTML. After JT's successful work splitting out the source files, we planned out in detail the steps necessary to reconstitute a complete XML version for the "original" set, from which the other XML versions will be generated. It ain't simple. It took two hours.