Lots of custom CSS required, but it seems to be working OK.
Spent a fair bit of time cleaning up silly filenames in the images (spaces, commas, ampersands, apostrophes, braces). I also wrote a diagnostic in Python, now part of the build process, which identifies cases where images are linked from the content but don't exist where they should in the repo (there's a sketch of the idea below). Following that, I polished off the 1821 content, which had some idiosyncrasies that required extensions to the schema.
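The check is essentially a walk over the content files, collecting every src value and testing it against the filesystem. A minimal sketch of the idea, with placeholder paths and the assumption that the content documents are well-formed XML with relative src attributes:

import os, glob
from xml.etree import ElementTree as ET

CONTENT_DIR = 'content'  # hypothetical location of the content documents

missing = []
for doc in glob.glob(os.path.join(CONTENT_DIR, '*.xml')):
    tree = ET.parse(doc)
    # look at every element that carries a src attribute
    for el in tree.iter():
        src = el.get('src')
        if src and not src.startswith('http'):
            target = os.path.normpath(os.path.join(os.path.dirname(doc), src))
            if not os.path.exists(target):
                missing.append((doc, src))

for doc, src in missing:
    print('MISSING: {0} (linked from {1})'.format(src, doc))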
BT in the dean's office asked me to post audio recordings of candidates for the chair of Pacific and Asian. He wants access to those files limited to the hiring committee. So I created a subfolder in hcmc/media called paas, put an htaccess file in it restricting access to specific NetLink IDs, and put the m4a and ogg audio files in there.
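The htaccess itself is just the standard Apache pattern; the snippet below is the general shape rather than the real file, since the exact auth directives depend on how the server handles NetLink sign-on:

# restrict this folder to the hiring committee's NetLink IDs
# (the AuthType/AuthName lines are placeholders for whatever auth module the server uses)
AuthType Basic
AuthName "PAAS chair candidate recordings"
Require user netlink1 netlink2 netlink3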
The home page at hcmc/media is now used to host links to specific competitions, in this case the chair of PandA.
Each candidate gets an image file, which you can easily generate from the AudioImageTemplate.jpg file on the site. The candidate's name is set at 96 point and the date at 48 point, both centered.
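If there turn out to be a lot of these, the overlay could be scripted instead of done by hand in an editor. A rough Pillow sketch, with invented filenames, font path and vertical placement (it also needs a recent Pillow for the anchor argument):

from PIL import Image, ImageDraw, ImageFont

def make_candidate_image(name, date, out_path,
                         template='AudioImageTemplate.jpg',
                         font_path='DejaVuSans.ttf'):
    img = Image.open(template)
    draw = ImageDraw.Draw(img)
    w, h = img.size
    # 96 point for the name, 48 point for the date, both centered horizontally
    name_font = ImageFont.truetype(font_path, 96)
    date_font = ImageFont.truetype(font_path, 48)
    draw.text((w / 2, h * 0.4), name, font=name_font, fill='black', anchor='mm')
    draw.text((w / 2, h * 0.6), date, font=date_font, fill='black', anchor='mm')
    img.save(out_path)

make_candidate_image('Candidate Name', 'Date of talk', 'candidate_name.jpg')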
Had to special-case the treaty documents in the XSLT linking code, and rename and re-id them all; now working OK. Also updated taxonomies and other content to give better listings of documents on the contents pages.
Wrote some XSLT to convert PG's FileMaker bibliography db, exported as XML, to RIS format, which can be imported into Zotero. There may be more tweaks needed, but so far so good: all 480 records went in without problems.
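For anyone picking this up later: RIS is a plain-text tagged format, so the XSLT just has to emit one block like the following per record (tag, two spaces, hyphen, space, value; the values here are invented for illustration):

TY  - BOOK
AU  - Surname, Givenname
TI  - An invented title, for illustration
PY  - 1820
ER  -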
Much progress today:
- Pressing the red triangle while editing a content document now builds the site, and then opens it at the page built from the content you're working on.
- We now have handling for footnotes and transcribed poems.
- AR is more than half-way through converting the 1816 documents.
- I've cleaned up a lot of the errors in earlier conversions, and in the incoming files, which I've Tidied, transformed, and then been hacking away at in advance of her reaching them.
- The schema now does more in the way of useful constraint than before.
I've set up an instructions page which also doubles as the site build launcher, and filled in some basic instructions, but there's a lot more to do there. I've converted all of the old content from the timeline into the half-fixed HTML format with Tidy and my converter, and I've already ported all the chronology divs into the content folder, although there are some spacing issues that will need to be fixed. There's one remaining problem with the build process for the later years, where all articles for all months are being linked on all pages, but I should be able to fix that tomorrow before AR gets started.
Discussed issues of layout and navigation functionality with KB; arrived at some conclusions, and implemented the results. Came up with a simple approach to image scaling on click, which involves a custom data attribute and pure CSS. Implemented all the decisions, fixed a bunch of image links and filenames in the first batch of imports, and got a whole pile of stuff working properly. Coming along nicely.
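For the record, the click-to-scale trick amounts to a data attribute as a styling hook plus a CSS state selector; this is one way that pattern can look (the attribute name, selectors and scale factor are illustrative, not necessarily what went into the site):

/* in the HTML, scalable images get the data attribute and tabindex="0",
   e.g. <img src="map.jpg" data-scalable="true" tabindex="0"/> */
img[data-scalable] {
  cursor: pointer;
  transition: transform 0.3s;
}
/* clicking the image gives it focus and scales it up;
   clicking anywhere else removes the focus and shrinks it again */
img[data-scalable]:focus {
  transform: scale(2);
}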
Discussion with PG about the conversion of a proprietary FileMaker bibliography db into Zotero. Determined we can do it by taking the XML output, XSLTing it to RIS, then importing. Will do the work on Monday.
Got all the links in the timeline working, so that all the timeline content will be accessible once it's all converted. Then used Tidy to convert one of the old files:
tidy -output timeline_1795_1815_TIDIED.html -asxhtml -clean -gdoc -numeric -utf8 timeline_1795_1815.html
to pre-process the first document so that it's amenable to XSLT, and then gradually hacked away at some XSLT until I had output that I could simply copy/paste into the content documents, with only one or two errors to fix by the end. This is good progress; tomorrow I should be able to do a whole new year.
I also looked at the image problem -- vast numbers of them, in various formats, with dupes and horrible filenames. First of all, I made the build code copy only the images which are actually linked from the documents being processed. This works nicely to clean up the output, but we should also be able to use it to clean up the contents of the repo by identifying images which are never linked. I don't want to add them all to the repo and then prune them later, so I'll work in stages: the image-listing code will produce an svn add command list that I can run as I add new content and new images are used, and the larger versions can then be handled manually.
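A rough sketch of that listing step, along the same lines as the missing-image check above (paths are placeholders again): gather the images that are actually linked, then print an svn add command for each one, to be looked over and run by hand.

import glob, os
from xml.etree import ElementTree as ET

CONTENT_DIR = 'content'  # hypothetical location, as above

linked = set()
for doc in glob.glob(os.path.join(CONTENT_DIR, '*.xml')):
    for el in ET.parse(doc).iter():
        src = el.get('src')
        if src and not src.startswith('http'):
            linked.add(os.path.normpath(os.path.join(os.path.dirname(doc), src)))

# print an svn add line for each linked image that exists on disk;
# the output is meant to be reviewed before running it
for path in sorted(linked):
    if os.path.exists(path):
        print('svn add "{0}"'.format(path))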