At CC's request, normalized the float settings of page numbers in the Ville Thierry; they were (mostly) the wrong way round. There's still a minor issue with left margins on verso page numbers, but I think that can be fixed in the static build.
Category: "Activity log"
TOC page generation in the static site was a bit crude, and lacking some features from the current site. After some consultation with PS about tablet devices, I've now rewritten it:
- There's an "information" column added to the table, which allows you to bring up the teaser block with a click, in addition to the mouseover event that only works with a mouse.
- The sort functionality can now use a sort key stored in @data-sortKey instead of the cell's text content, and I've created a sort key generation system that correctly handles leading articles, punctuation and whitespace.
- Fixed a bug where a biblio reference in a TOC page intro became a non-functional link.
I think TOCs are fully working now.
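The sort key generation might be sketched roughly like this in Python (the article list, and the exact normalization steps, are assumptions; the real system presumably runs at build time when @data-sortKey is written out):

```python
import re
import unicodedata

# Hypothetical article list; the real system may cover more languages.
LEADING_ARTICLES = ("the ", "a ", "an ", "le ", "la ", "les ", "l'")

def make_sort_key(title: str) -> str:
    """Build a normalized sort key: fold case and accents, strip one
    leading article, drop punctuation, and collapse whitespace."""
    key = unicodedata.normalize("NFKD", title)
    key = "".join(c for c in key if not unicodedata.combining(c))
    key = key.strip().lower()
    for article in LEADING_ARTICLES:
        if key.startswith(article):
            key = key[len(article):]
            break
    key = re.sub(r"[^\w\s]", "", key)       # remove punctuation
    key = re.sub(r"\s+", " ", key).strip()  # collapse whitespace
    return key
```

With a key like this stored in @data-sortKey, the client-side sort can compare plain strings without worrying about articles or punctuation.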
Met with Martin (Tuesday 22nd) and discussed the front end design for the Mariage site.
I've started a design for a site banner/logo, my idea being to use modified historical drawings from the site, specifically the man and the woman made of household items.
We agreed the site would support only modern, standards-compliant browsers, and would be a mobile-first design.
My desktop is running Ubuntu 15.10, which has a newer (but not the newest) version of vips that contains a bug: it places the ImageProperties.xml file in the gravures/imgxxx/imgxxx/ folder instead of one folder up, where it should be. I had to add a trap and fix for this in the Python script that calculates the tileset's highest zoom level.
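The trap and fix might look something like this (a minimal sketch, assuming the misplaced copy sits in a subfolder with the same name as the tile directory, as described above):

```python
import shutil
from pathlib import Path

def fix_misplaced_properties(tile_dir: Path) -> Path:
    """Work around a vips bug that writes ImageProperties.xml into
    tile_dir/<name>/ instead of tile_dir itself: if the file is
    missing at the expected location but present one folder down,
    move it up before reading it."""
    expected = tile_dir / "ImageProperties.xml"
    if expected.exists():
        return expected
    misplaced = tile_dir / tile_dir.name / "ImageProperties.xml"
    if misplaced.exists():
        shutil.move(str(misplaced), str(expected))
        return expected
    raise FileNotFoundError(f"ImageProperties.xml not found under {tile_dir}")
```

The zoom-level calculation can then call this once per tileset and proceed regardless of which vips version produced the tiles.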
The results of the crawl from Friday were briefly available to me on the Wayback Machine over the weekend, but now seem to have mysteriously disappeared. I've tweaked a bit and run a second crawl, having learned to exclude external URLs (which were being followed previously). That crawl has completed now, and presumably will show up in the Wayback Machine at some point, ready for testing.
Met with CC and discussed our progress; she's working on the diagnostics output and proofing right now.
I've completed the tiny site build, and included XML and AJAX examples to cover the full range of what we have in the site. I tried running a test crawl from Jenkins, but this appeared to fail after finding only two documents, and it looks as though something is blocking Jenkins URLs. I've put a copy on a regular server and am crawling that to see if it makes a difference.
Today I got the basic transformations and filesets organized for creating the tiny site. I have a subset of HTML files identified which are transformed to remove links to non-included files; I have a subset of images being copied over; and I have a working interface for the tiny site. Still to do:
- XML documents for the HTML files which have them as direct sources need to be included in their own includes file.
- The home page needs a special transformation that explains the purpose of the tiny site, in case it gets loose.
- The tiny site ant task needs to be antcalled in the default build process.
After that, I think we'll be good to go.
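The last item above is just a matter of wiring an antcall into the default target; something along these lines, though the target and file names here are hypothetical and will need to match the real build:

```xml
<!-- Sketch only: target names are assumptions, not the real build's. -->
<target name="all" depends="validate, transform">
  <!-- Build the stripped-down tiny site after the main outputs. -->
  <antcall target="tinySite"/>
</target>

<target name="tinySite">
  <!-- Transform the HTML subset, copy the image subset, build the interface. -->
  <ant antfile="build_tiny.xml"/>
</target>
```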
The build was broken because the wonderful Nu validator was finding errors in the HTML5 output that originated in bad attribute values in the original XML (@xml:lang and @target values mainly). I've now tracked down and fixed all of those, and also caught some other problems with head elements inside lists. The build is now working again, and I'm writing XSLT to generate a really stripped-down version of the website, by transforming the existing output, designed for rapid crawling by the Archive-It crawler.
I began with the assumption that I could just pull every linked reference into the XML of a specific document, and thus create complete, coherent docs; but this is not the best approach, because references in this project link to each other recursively, so one might very easily end up with the entire reference collection embedded in many documents. Instead, I've created an indexing system that builds an index to the 6,000+ items (references, biblio items and tile images) that are not explicitly linked because they're accessed through JS, and I've included an invisible link to that file in the footer of most of the front pages of the site. CD's crawler is now working on this version of the site, and we'll see if it does the job or not.
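The index page itself can be very simple: a static list of plain links, since its only job is to make the JS-accessed resources discoverable to a crawler that doesn't execute JavaScript. A minimal sketch (the function name and the idea of feeding it site-relative URLs are assumptions):

```python
from pathlib import Path
from xml.sax.saxutils import escape

def build_crawler_index(urls, out_path):
    """Write a plain, static HTML page of links to resources that are
    normally reached only through JS, so a non-JS crawler can find
    them by following one (invisible) link from the site footer."""
    items = "\n".join(
        '<li><a href="{0}">{1}</a></li>'.format(
            escape(u, {'"': "&quot;"}), escape(u)
        )
        for u in urls
    )
    html = (
        '<!DOCTYPE html>\n'
        '<html><head><meta charset="utf-8"/>'
        '<title>Site index for crawlers</title></head>\n'
        '<body>\n<ul>\n' + items + '\n</ul>\n</body></html>\n'
    )
    Path(out_path).write_text(html, encoding="utf-8")
    return out_path
```

The front pages then only need one hidden anchor pointing at this file for the whole set to be reachable.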