- Wrote Billiter Lane essay
- Encoded and posted Billiter Lane essay
- Met with Prof. Adrian Kiernander, visiting scholar from Australia
A new essay on "Billiter Lane," by Janelle Jenstad (with research contributions from Morag St. Clair, Undergraduate Research Scholar), has been posted to The Map of Early Modern London.
I inserted the xml:id attributes for file names and the TEI version number into the headers for the NKS 1867 and SAM 66 xml files. I will do this for the SKng files tomorrow.
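A minimal sketch of this kind of header stamping, assuming Python's ElementTree; the one-element document, file name, and version value below are illustrative, not the project's actual files:

```python
# Hypothetical sketch: recording a file's xml:id and a TEI version number
# on the root element of a document (names here are illustrative).
import xml.etree.ElementTree as ET

XML_NS = "http://www.w3.org/XML/1998/namespace"

def stamp_header(tei_source, file_id, tei_version):
    """Parse a TEI document and set @xml:id and a version attribute on the root."""
    root = ET.fromstring(tei_source)
    root.set(f"{{{XML_NS}}}id", file_id)   # serialized as xml:id
    root.set("version", tei_version)
    return root

root = stamp_header("<TEI/>", "SAM66-073r", "P5")
print(root.get(f"{{{XML_NS}}}id"))  # SAM66-073r
```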
I added the affix 01 to the ids for gods, giants, persons, monsters, artifacts, etc., in NKS 1867 and SAM 66. There are some duplicate names in SKng, and more duplicates will likely turn up in the long run.
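The affixing scheme can be sketched roughly as follows (the names are hypothetical examples; the assumption, based on the note about duplicates, is that later occurrences of the same name would take 02, 03, and so on):

```python
# Illustrative sketch: give each occurrence of a name a two-digit affix,
# so duplicates get distinct ids (Odin -> Odin01, a second Odin -> Odin02).
from collections import Counter

def affix_ids(names):
    """Return a numbered id for each name, counting repeats."""
    seen = Counter()
    ids = []
    for name in names:
        seen[name] += 1
        ids.append(f"{name}{seen[name]:02d}")
    return ids

print(affix_ids(["Odin", "Thor", "Odin"]))  # ['Odin01', 'Thor01', 'Odin02']
```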
I realized that I had used an incorrect idno form (i.e., SAM66-f.73r) rather than the correct form (i.e., SAM66-073r) in the files for the two manuscripts NKS 1867 and SAM 66. I renamed the files for NKS 1867 and SAM 66 for consistency and entered the corrected xml:id file names into the files. This is not a problem in the SKng files.
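The renaming described above amounts to a small string transformation, which could be sketched like this (the regex and helper are my own illustration of the pattern, not the project's actual tooling):

```python
# Sketch of the idno normalization: "SAM66-f.73r" -> "SAM66-073r"
# (drop the "f." prefix, zero-pad the folio number to three digits).
import re

def normalize_idno(idno):
    m = re.fullmatch(r"(\w+)-f\.(\d+)([rv])", idno)
    if not m:
        return idno  # already in the target form
    siglum, folio, side = m.groups()
    return f"{siglum}-{int(folio):03d}{side}"

print(normalize_idno("SAM66-f.73r"))  # SAM66-073r
```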
I proofed the NKS 1867 and SAM 66 files and corrected typos while I was making the changes listed above.
I resized 79 images from NKS 1867, SAM 66, and SKng, then cropped them to 100 x 100 pixels and saved them as thumbnails.
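The cropping step comes down to center-crop arithmetic; a sketch (the actual resizing was presumably done in an image editor, so this only computes the crop box a 100 x 100 thumbnail would use):

```python
# Sketch: compute the (left, top, right, bottom) box for a centered
# size x size crop of an image that has already been resized.
def center_crop_box(width, height, size=100):
    """Return the box for a centered size x size crop."""
    left = (width - size) // 2
    top = (height - size) // 2
    return (left, top, left + size, top + size)

print(center_crop_box(150, 100))  # (25, 0, 125, 100)
```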
I have roughly 200 xml files left in 1858 for PB-tagging. So, over 400 completed already! I forecast roughly 4 days to finish the rest.
Caitlin will use up the rest of her hours on time, and this should leave us with a much-improved vessels database. I will work with her next Friday to tune it up for publication.
A general remark on the use of Wikipedia: although it is sometimes necessary to corroborate information obtained from Wikipedia with a second source, I am not against using it, first because it often has more complete references than other sources do. Christian Vandendorpe's March lecture on Wikipedia also loosened up my stance.
That said, the inherent instability of Wikipedia does make me queasy. The deciding factor in whether to use it, either as the exclusive source for a reference or as one of two or three, is context. The main thing I'm going to be checking as I go through the references is their relevance to the contexts in which they are used in our anthology. Fortunately, the ability we now have to link directly to the texts lets us see exactly which information a reference needs to supply. For example, we can't include every myth associated with Hercules in a reference to him, so we must choose our commentary based on the context(s) in which we're working. Often, Wikipedia provides relevant detail that may be difficult to find elsewhere. When it serves the context well, please go ahead and cite Wikipedia.
Out of the office for a bit to take my bike to be fixed, and then left a little early.
Created a basic application framework, based around a home page which is also the search page:
- / forwards to index.htm
- index.htm calls search.xq. It may or may not provide a parameter "find" (usually not).
- search.xq provides either search results only, or (if there's no "find" parameter), all items as results, along with a welcome.xml bit to be used as a "splash".
- search.xq is processed through site_page.xsl, which will provide all the basic stuff (style and JS linking, titles, search box, footers etc.). It then depends on search.xsl to process the search results into the two main components (image band and alphabet menu) used for browsing images.
- site_page.xsl also depends on tei.xsl for rendering of basic items, and on strings.xsl for paths to images, captions, and so on.
- If search.xq is called directly by the browser, this is assumed to be an AJAX request, so the results of it are processed only by search.xsl, sending back a new version of the search result browser controls to the page for insertion.
- If item.xq is called, a single item is retrieved by id, and processed through item.xsl, for display in the main content area of the page. This would be invoked by the user clicking on one of the images in the banner across the top.
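The routing above can be sketched as a small dispatcher; this is only an illustration of the flow (the real pipeline lives in the Cocoon-style sitemap.xmap, and the return strings below just name which stylesheet would handle the result):

```python
# Hypothetical sketch of the request routing described above.
# search.xq / item.xq / the stylesheet names come from the text;
# the string results stand in for the actual pipeline output.
def route(path, params, ajax=False):
    if path == "/":
        return route("index.htm", params)          # / forwards to index.htm
    if path == "index.htm":
        # no "find" parameter means all items, plus the welcome splash
        query = params.get("find")
        results = f"results for {query!r}" if query else "all items + welcome splash"
        return f"site_page.xsl({results})"
    if path == "search.xq" and ajax:
        # a direct call is assumed to be AJAX: search.xsl only, no page wrapper
        return f"search.xsl(results for {params.get('find')!r})"
    if path == "item.xq":
        # single item by id, rendered into the main content area
        return f"item.xsl(item {params['id']})"
    raise ValueError(f"unhandled path: {path}")

print(route("/", {}))                     # site_page.xsl(all items + welcome splash)
print(route("item.xq", {"id": "doc01"}))  # item.xsl(item doc01)
```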
I've built this basic framework, and started on the XQuery and XSL. What's explained above is of course much easier to understand if you just read the sitemap.xmap file. That's as far as I can easily go for now, because I need from PAB:
- Square (100x100?) images for thumbnails of all the images in the set.
- @xml:id attributes to be added to the document files.
Note for myself and Trish: in addition to images, biblio, and names, we also need a collection called "docs" which contains the actual document images. We don't want them to sit in the root collection, because it'll be a bit harder and more time-consuming to search only those documents. I'm writing my XQuery and XSLT on the assumption that this is the structure.
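A sketch of the layout this note assumes (collection and file names are hypothetical): the point is simply that a search enumerates only the text-bearing collections and never has to filter out the document images in docs.

```python
# Hypothetical layout: "docs" holds the full document images and is
# simply not listed among the searchable collections.
collections = {
    "images": ["thumb-SAM66-073r.jpg"],
    "biblio": ["biblio.xml"],
    "names":  ["names.xml"],
    "docs":   ["SAM66-073r.jpg"],  # document images, excluded from search
}

SEARCHABLE = ("images", "biblio", "names")
searchable_files = [f for c in SEARCHABLE for f in collections[c]]
print(searchable_files)
```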
Trying to finish prep for Dublin...