...to help PS as he works on the site design, to remove duplicate title rendering, and to use truncated modern titles where appropriate.
PS and I worked through a list of changes to the XHTML5 generation and made some additions to the content to support the ongoing style development. PS will now be working directly in the svn repo. We have forwarded some questions to CC and are awaiting responses.
At CC's request, normalized the float settings of page numbers in the Ville Thierry; they were (mostly) the wrong way round. There's still a minor issue with left margins for verso page numbers, but I think that can be fixed in the static build.
TOC page generation in the static site was a bit crude and lacked some features of the current site. I've now rewritten it, after some consultation with PS about tablet devices, so that an "information" column is added to the table; this lets you bring up the teaser block with a click, in addition to the mouseover event that only works with a mouse. I've also rewritten the sort functionality so that it can use a sort key stored in @data-sortKey instead of the cell's text content; created a sort-key generation system that correctly handles leading articles, punctuation and whitespace; and fixed a bug where a biblio reference in a TOC page intro became a non-functional link. I think TOCs are fully working now.
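For the record, here's a minimal sketch of the sort-key approach; the function names, the article list and the fallback behaviour are illustrative rather than the site's actual code:

    // Build a sort key by dropping leading articles and punctuation and
    // collapsing whitespace; stored in @data-sortKey at build time so the
    // client-side sort never has to parse display text.
    function makeSortKey(title) {
        return title
            .replace(/^(the|a|an|le|la|les)\s+/i, '')  // leading articles (list illustrative)
            .replace(/[^\p{L}\p{N}\s]/gu, '')          // strip punctuation
            .replace(/\s+/g, ' ')                      // collapse whitespace
            .trim()
            .toLowerCase();
    }

    // Client-side sort: prefer the stored key, fall back to text content.
    function sortRows(rows) {
        return Array.from(rows).sort(function(a, b) {
            var ka = a.getAttribute('data-sortKey') || a.textContent;
            var kb = b.getAttribute('data-sortKey') || b.textContent;
            return ka.localeCompare(kb);
        });
    }

Keeping the key in an attribute means the display text stays free for typographic niceties (leading articles, punctuation) without breaking the sort.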
Met with Martin (Tuesday 22nd) and discussed the front-end design for the Mariage site.
I've started a design for a site banner/logo, my idea being to use modified historical drawings from the site, specifically the man and the woman made of household items.
We agreed that the site would support only modern, standards-compliant browsers, and that the design would be mobile-first.
My desktop is running Ubuntu 15.10, which has a newer (but not the newest) version of vips, containing a bug: it places the ImageProperties.xml file in the gravures/imgxxx/imgxxx/ folder instead of one level up, where it belongs. I had to add a trap and fix for this in the Python script that calculates the tileset's highest zoom level.
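The actual fix lives in that Python script; the logic of the trap amounts to something like this, sketched in JavaScript for consistency with the other examples here (paths and names are illustrative):

    // If vips has dropped ImageProperties.xml one level too deep,
    // move it up to where the tileset code expects it.
    var fs = require('fs');
    var path = require('path');

    function fixMisplacedProperties(tilesetDir) {
        var name = path.basename(tilesetDir);  // e.g. "img042" (invented example)
        var wrong = path.join(tilesetDir, name, 'ImageProperties.xml');
        var right = path.join(tilesetDir, 'ImageProperties.xml');
        if (fs.existsSync(wrong) && !fs.existsSync(right)) {
            fs.renameSync(wrong, right);
        }
    }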
My second crawl, initiated yesterday, now shows up in the Wayback Machine, so I was able to test. The first thing I discovered was that a JavaScript hack intended to work around IE's tendency to cache AJAX calls, by appending a randomly-generated query string to each URI, was causing those calls to fail, even though the file was actually there. This suggests that query strings in general may be an issue. In any case, since the purpose of a static site is to be static, there's probably no need to worry about AJAX caching at all, so I've commented that bit out, and we'll see whether the problem is solved on the next crawl.
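The hack in question is the familiar cache-busting pattern, something along these lines (illustrative, not the exact code; the parameter name and example path are invented):

    // Classic IE cache-buster: append a throwaway query string so each
    // AJAX request looks unique. Under the Wayback Machine this backfires:
    // only the bare URI was archived, so the randomized request fails.
    function bustCache(uri) {
        var sep = uri.indexOf('?') === -1 ? '?' : '&';
        return uri + sep + '_nocache=' + new Date().getTime();
    }

    // e.g. xhr.open('GET', bustCache('fragments/ref001.xml'));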
However, a much bigger problem was also revealed: the Wayback Machine deliberately mangles JavaScript, in rather nasty ways that ensure that (for instance) OpenLayers is completely broken. It does crude but understandable things, like replacing URIs with Wayback Machine versions of those URIs -- which is itself destructive, because it does this even to URIs used as namespaces. But even worse, it replaces calls to JavaScript functions with its own versions of those functions, even when they are called on objects about which it knows nothing. It replaced calls to postMessage with its own WB_PostMessage, causing the complete failure of the OL-based functionality for the gravure in the test site. I've posted a support request about this; at best, hopefully there's a way to exempt specified JavaScript files from mangling; at worst, perhaps we can serve as a test case for fixing that code so it doesn't do this kind of damage.
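To make the postMessage breakage concrete, the rewriting amounts to something like this (object and variable names invented for illustration):

    // What the site's code does, roughly:
    //     viewerWindow.postMessage(payload, targetOrigin);
    // What the archived page runs after the Wayback Machine's rewriting:
    //     viewerWindow.WB_PostMessage(payload, targetOrigin);
    // viewerWindow is an ordinary window object with no WB_PostMessage
    // method, so the call throws a TypeError and the OpenLayers-based
    // gravure viewer never receives its messages.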
The results of Friday's crawl were briefly available to me on the Wayback Machine over the weekend, but now seem to have mysteriously disappeared. I've tweaked the configuration a bit and run a second crawl, having learned to exclude external URLs (which were being followed previously). That crawl has now completed, and will presumably show up in the Wayback Machine at some point, ready for testing.
Met with CC and discussed our progress; she's working on the diagnostics output and proofing right now.
I've completed the tiny site build, and included XML and AJAX examples to cover the full range of what we have in the site. I tried running a test crawl against the Jenkins-hosted copy, but this appeared to fail after finding only two documents, and it looks as though something is blocking Jenkins URLs. I've put a copy on a regular server and am crawling that to see if it makes a difference.
Today I got the basic transformations and filesets organized for creating the tiny site. I have identified a subset of HTML files, which are transformed to remove links to non-included files; I have a subset of images being copied over; and I have a working interface for the tiny site. Still to do:
- The references and medical-terms pages link to XML fragments via AJAX. We need to select a subset of those links (maybe the first three on each page), keep them and copy over their associated XML fragments, and suppress the JavaScript AJAX links for the rest.
- The XML documents that are direct sources for the included HTML files need to be listed in their own includes file.
- The home page needs a special transformation announcing the purpose of the tiny site, in case it gets loose.
- The tiny-site Ant task needs to be invoked (via antcall) from the default build process.
After that, I think we'll be good to go.