This is designed to build menus, banners, footers etc. I've worked on the menu one -- not really done yet, but it's the first time I've actually had to use tunnelled parameters. Pretty cool.
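For my own future reference, the tunnelling pattern boils down to something like this (a minimal sketch, not the actual menu code; the matches and the parameter name are illustrative):

  <xsl:template match="tei:div">
    <xsl:apply-templates>
      <!-- Set once, high up the tree. -->
      <xsl:with-param name="currSection" select="@xml:id" tunnel="yes"/>
    </xsl:apply-templates>
  </xsl:template>

  <xsl:template match="tei:head">
    <!-- Picked up here without any intermediate template having to pass it along. -->
    <xsl:param name="currSection" tunnel="yes"/>
    <li class="{if ($currSection eq 'index') then 'current' else 'other'}">
      <xsl:apply-templates/>
    </li>
  </xsl:template>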
I have the XML working in the header of the standalone file, in all formats, and I've also been able to make use of that in the rendering of the Appendix in the XHTML5 output, which is coming along (although not really testable yet -- must set up that pipeline soon).
After starting work on the appendix processing for XHTML5 output, it became clear to me that there's a huge benefit in having TEI XML versions of the various citation suggestions per the various style guides, so I've started work on generating those and storing them in the notesStmt. This threw up all sorts of problems related to respStmts, including orgs used as (for instance) authors but with @type="org" omitted from their name tag, as well as a stack of issues around anonymous attribution. I'm working on those now; meanwhile, I have the RIS, RefWorks and Chicago output working (as far as I can tell), and the others should be straightforward to write once these kinks are out.
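For the record, the general shape of what's going into the notesStmt is something like this (a rough sketch only: the @type/@subtype values and the bibl content are placeholders, not our settled encoding):

  <notesStmt>
    <note type="citation" subtype="chicago">
      <bibl>Surname, Forename. "Title of the Document." <title level="m">The Map of Early Modern London</title> ...</bibl>
    </note>
    <!-- ...one note per style guide (RIS, RefWorks, etc.)... -->
  </notesStmt>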
Much useful progress in tagging people and orgs. KT and I did all of Coleman Street Ward.
AJAX fragments are now being created for location files. We've also devised a strategy to avoid the date-rendering problems, by turning the explanatory content into custom attributes rather than generating an explanatory block of HTML. Popups can then be generated on the fly where necessary (although appendix content can also be generated for document pages as planned).
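The idea, roughly (the attribute names here are made up for illustration, not the ones we settled on):

  <span class="date" data-when="1598-03-25" data-explanation="25 March was New Year's Day in old-style dating">the 25. of March, 1598</span>

JS can read the data- attributes at click time and build the popup, instead of us baking a block of explanatory HTML into every page.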
Met with JJ and JT to make some decisions on the static build. Outcomes:
- We will build complete HTML pages with all resources, functional through pure linking, before JS intervenes on page load to add the interactive functionality and hide some of the resources.
- Resources will be organized in distinct lists (footnotes, bibliography, people mentioned, orgs, glossary, date explanations).
- We will implement the HTML first; then when we implement a print version in PDF, we are prepared to radically change the layout, perhaps replacing links with multiple systems of footnote numbering (footnote numbers; people numbered p1, p2, etc.; locations numbered l1, l2, or whatever). These will be different "editions" of the text, and this will be highlighted in the citation suggestions in each case.
- Fragment pages (bios, bibls, etc.) will be credited automatically to the MoEML team (TEAM1).
- List pages dependent on XIncludes (historical bibliography, for instance) will XInclude not only the content they need but also, in the header, the complete respStmts from the source page, allowing for proper attribution (see the sketch after this list).
- People's contributions (in their bios) must also include contributions made by groups of which they are a member.
- The responsibility taxonomy will have its own page, and will generate sublisting pages (all the Authors, all the Encoders, etc.).
- DB-style documents (PERS1, BIBL1 etc.) will be rendered into full independent pages.
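On the XIncludes point above, the content side is just a standard include (the file name and target id here are illustrative); the respStmts from the header of the source file would be pulled across the same way:

  <xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
              href="BIBL1.xml"
              xpointer="bibliography_content"/>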
Laid out in an email all the factors which are pushing us towards the notion of a complete, coherent XHTML page including all its linked components. We'll make a final decision on this tomorrow. Meanwhile, after some discussion, I've switched to building the AJAX fragments directly from the standalone XML. I'm now handling most of the contents of these items, including lookups for corresp'ed person elements, but I'm not yet doing anything with dates, since that code is not yet written. That's to come.
The Jinks build should now fail on finding any duplicate @xml:id attributes. I've also been working on the static build for AJAX components, and realizing that I have to backtrack a bit and reconsider how fragments are processed; it's quite a gnarly thing, since they have to be wrapped (for AJAX purposes) and yet they have to have the same @xml:id, so that attribute has to sort of migrate up the tree in a way that makes me a little uncomfortable. I have people and orgs building too (although I've broken that somehow in the last hour).
Integrated JT's dating code into the static build, finding and fixing a couple of bugs along the way. After this, working on the AJAX fragments, I discovered that we had some duplicate xml:ids in the project. It turned out that the diagnostic to catch them was broken. I fixed it, then fixed the important duplicate ids (others can be dealt with by the folks dealing with packages for contributors), and then worked some more on the AJAX. I now have bibls working OK, and I'm starting in on people.
During and following the MoEML meeting, JT and I thrashed out an approach to the static HTML output which aims to be as future-proof as possible, bearing in mind best practices and the rise of the CSP. Basic ideas:
- Fragmentary resources will be compiled into .xml AJAX HTML fragments, for AJAX retrieval; and also into complete standalone pages for direct linking. The second will be generated from the first. They will include all the cited/mentioned lists for the item.
- HTML pages will initially link out to the standalone pages rather than incorporating their own copies of items. The only exceptions will be notes, which are inherent to the page, and bibliography items, because we must always provide a works cited list in any page; in this case, initial links will bounce down to the item in the works cited list.
- All links in the HTML must retain information (through @data attributes) on the type of item they're pointing at, for styling and similar purposes.
- At load time, a discrete JS module will process all of these links into popup calls (i.e. AJAX or copy-from-works-cited). This process will also add the relevant classes to all links (external or "internal") so people know when they're leaving the current page.
- All style will be achieved through external stylesheets (no internal CSS or @style). This means that each individual page will have its own accompanying CSS file (assuming it has rendition elements in its header). We will have to take care to ensure that the interaction of existing CSS, hand-created renditions, and auto-created renditions (from the XML standalone processing) results in the same effects in the output as was originally achieved/intended (so we'll have to take care with precedence).
Got the XML standalone style rationalization working, so now we're down to rendition rather than @style. Then did some basic work on Ajax fragments in the output. I'm realizing we'll either have to decide to reproduce the current HTML exactly, or we'll have to refactor carefully from the ground up. I'm tending towards the latter, but it needs some thought. We could make use of the @data- attributes to preserve more info from the input XML, and use that for more rendering control, which might perhaps make things simpler in the long run; but the short-term pain will be worse.
Some progress in normalizing renditions -- the renditions are working, but stage 2 (style attributes) is failing for some reason. Refactored the module so the build process doesn't have to spark up a new VM for each transformation.
I lifted a module from Mariage and rewrote it for MoEML, so that we can perform the same rationalization operation on the use of <rendition>/@rendition/@style (basically, we convert inline styles into rendition elements in the header, cleaning up the transcription a little). The transformation is working fine, but for some reason it fails when run in the context of the ant build. Still looking into that.
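The core of the rationalization looks roughly like this (a stripped-down sketch, not the Mariage module itself; it assumes a tagsDecl already exists in the header, and the rend_n id scheme is illustrative):

  <xsl:stylesheet version="2.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:tei="http://www.tei-c.org/ns/1.0"
      xmlns="http://www.tei-c.org/ns/1.0">

    <!-- All distinct inline style values used in the text. -->
    <xsl:variable name="styles" select="distinct-values(//tei:text//@style)"/>

    <!-- Identity transform. -->
    <xsl:template match="@*|node()">
      <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
    </xsl:template>

    <!-- Declare one rendition per distinct style value in the header. -->
    <xsl:template match="tei:tagsDecl">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
        <xsl:for-each select="$styles">
          <rendition xml:id="rend_{position()}" scheme="css"><xsl:value-of select="."/></rendition>
        </xsl:for-each>
      </xsl:copy>
    </xsl:template>

    <!-- Swap each @style in the text for a pointer to its rendition. -->
    <xsl:template match="tei:text//@style">
      <xsl:attribute name="rendition">
        <xsl:value-of select="concat('#rend_', index-of($styles, string(.)))"/>
      </xsl:attribute>
    </xsl:template>

  </xsl:stylesheet>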
Finished the moleebo and molebba private URI schemes, and then started work on rendering the AJAX fragments we'll need. However, a long discussion with JT convinced me I need to address the issue of the generated content (gazetteer and praxis index), since it's not being handled in an ideal way. We've decided to use <divGen> in our standalone XML versions of the documents (i.e. replace the XHTML content with that), and then generate XML versions of the content as part of the standalone generation, and finally convert those to paged XHTML outputs. It'll take some figuring out, but it's the right approach in the long term.
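The placeholder itself is trivially simple; in the standalone XML it'll be something like this (the @type value is illustrative):

  <divGen type="gazetteer"/>

The work is all in the code that notices it and generates the content in its place.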
I did a lot of work over the break and I've been pushing forward with it today too. I think all that remains is the moleebo: and molebba: prefixes. I'm still validating against tei_all; it's worth considering whether we should develop a customization of our own customization for the expanded files to use.
I've made a good start on the XSLT to create the standalone XML files.
There's a build file and some cleanup routines, with targets mapping what should initially happen with the XML output. Tomorrow we really start work.
Did some initial setup, svn tutorial, basics of finding files and looking at EEBO pages, etc. He's off and running.
We have a new workstudy, who will start work tomorrow. Paperwork done today.
The mdtList.xql lib was assuming that all blog posts would have div[@xml:id="content"], which we have sort of deprecated. Now fixed; it doesn't even assume the presence of a div (because some files have paras as direct children of the body).
KT and TL had got into a bit of a tangle with commits of the same-named file in the blog folder. Sorted it out, and fixed some other problems in the process (wrong id on root element, svn properties not set on most recent files).
Today the webapp went into a tailspin several times, and had to be restarted, only to die again. It looks like hundreds of sessions were being started on the local host as soon as the app started up, freezing it. Eventually it seemed to clear itself, but nothing in the logs looks like it was causing this problem. We still don't know what it was, but presumably not a DoS attack or an ill-behaved robot. Something weird in eXist.
Just logging time spent over the last few days on helping JT work out the date processing system he's writing.
TL has moved all the existing proposed ids from the spreadsheets into the actual data, so I've cleaned out all references to the spreadsheets from the diagnostics and the next-free-id module.
JT and I worked on a system loosely based on what the TEI does, but actually simpler, to validate the content of egXML elements in our praxis documents. What happens is that each child node of an egXML element (assuming it's an element) is copied out into a separate temporary document and transformed back into the main TEI namespace, and each child text node is pushed out into a separate document wrapped in a p element; then our RNG schema is transformed to add as start elements all the elements that occur as children of egXML (along with p); then the fragment documents are all validated three ways: with our new RNG, with our standard Schematron, and with the extracted TEI Schematron. This has thrown up lots of lovely errors in our examples, which JT is now fixing, before we make this process part of the main build.
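In outline, the fragment-extraction step looks something like this (a rough sketch, not the code we actually wrote; the output file naming and the namespace shuffling are simplified):

  <xsl:stylesheet version="2.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:tei="http://www.tei-c.org/ns/1.0"
      xmlns:teix="http://www.tei-c.org/ns/Examples">

    <xsl:template match="/">
      <!-- Every element or non-whitespace text child of an egXML becomes its own temporary document. -->
      <xsl:for-each select="//teix:egXML/node()[self::* or self::text()[normalize-space()]]">
        <xsl:result-document href="fragments/frag_{position()}.xml">
          <xsl:choose>
            <!-- Elements get shifted back into the main TEI namespace. -->
            <xsl:when test="self::*">
              <xsl:apply-templates select="." mode="toTei"/>
            </xsl:when>
            <!-- Bare text nodes get wrapped in a p so they can be validated at all. -->
            <xsl:otherwise>
              <xsl:element name="p" namespace="http://www.tei-c.org/ns/1.0">
                <xsl:value-of select="."/>
              </xsl:element>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:result-document>
      </xsl:for-each>
    </xsl:template>

    <!-- Rebuild elements in the TEI namespace, keeping names, attributes and content. -->
    <xsl:template match="*" mode="toTei">
      <xsl:element name="{local-name()}" namespace="http://www.tei-c.org/ns/1.0">
        <xsl:copy-of select="@*"/>
        <xsl:apply-templates select="node()" mode="toTei"/>
      </xsl:element>
    </xsl:template>
    <xsl:template match="text()" mode="toTei"><xsl:copy/></xsl:template>

  </xsl:stylesheet>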
I've also added jing to the utilities folder so that we don't have to depend on its being in a known location on a build host.
We should now be down to only graphics.
Long meeting with some decisions made about dates and other stuff; cleaned up a bunch more of the old documents with respect to use of @style and missing quote elements; and added a couple more things to the build process, including JT's agasProgressReport.
There is an ancient survival of the use of @style in the classes documents for historical reasons, and this has given encouragement for the eruption of a plague of such things elsewhere. I've now purged the classes documents and fixed some other problems with them, and tweaked the Schematron a little to allow the use of a plain <label> element for situations when someone wants an inline label (which shows up bold in the output) rather than a block heading. This is a good basis for purging the rest of this pernicious infection.
This was left out of the original process because it required saxon: parameters to control whitespace, but I've now rewritten it so it will run as generic XSLT2. It's being split and validated in the same way as the gazetteer, and that process is now more generic for future flexibility.
I've implemented a workaround for the problem of NVDL: I'm now splitting the Gazetteer into two separate temporary files, one TEI and one XHTML5, and validating each separately, then deleting them. It's the best approach I can devise for the moment. Once I did that, I found that the XHTML5 was not in Unicode NFC; I've now set up the two gazetteer-handling XSLTs to do that, but it reminds me that we should try to get NFC enforced throughout the data collection if we can.
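The NFC step itself is basically a one-liner in each of those XSLTs (text nodes only shown here; attribute values would want the same treatment):

  <xsl:template match="text()">
    <xsl:value-of select="normalize-unicode(., 'NFC')"/>
  </xsl:template>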
I've hit a roadblock in my intention to use NVDL, related to the validation of nested fragments, and I'm waiting for a hexpert to give me a response to a dumb question I've asked; it turns out that it may not be possible to do exactly what I want to do, so I may have to resort to something more tricky (decomposing the gazetteer.xml file into separate files just for validation purposes). Very annoying. But in the process, I did find some oddities in the gazetteer generation code (such as unnecessary empty tables in the output) which I've now corrected.
I want to validate our more complex documents with NVDL (the gazetteer is the archetypal problem document here, with lots of pre-rendered XHTML embedded in TEI). I'm half-way through learning NVDL and figuring out how best to do this.
jsonlint is available at the command line in Ubuntu from the python-demjson package, which I've now installed locally and on the Jenkins server, and I've added a task to validate the JSON files produced by the build.
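The task is just an <apply> over the generated files, something like this (the directory property is illustrative):

  <apply executable="jsonlint" failonerror="true">
    <fileset dir="${json.output.dir}" includes="**/*.json"/>
  </apply>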
...on Jenkins to include generation of JSON and gazetteer materials; this provides an easy source from which anyone can pull updated versions when they need to, without worrying about scenarios etc. In the process, validated the diagnostics output, and found a lot of infelicities which are now fixed. Note to self: don't forget to validate HTML, even if it's transient output like this.
I previously used CodeSharing as a test port of a project to GitHub for the TEI Council, but I've now fleshed out the GH repo and rewritten some code, done a release, added a readme and so on. GH will now be the primary dev site for CodeSharing.
One of the important things we want Jenkins to do for us is validate all our XML and send an email to the last committer when a file is broken. There are several steps to this, and I'm working through them; this post will record how I set that up. Everything is run as an ant task in a file called
Regular RNG validation
This is handled by Jing, through its built-in support for ant tasks. You use <taskdef> to define the task, pointing at the classname Jing provides; then you invoke that task and pass it filesets containing the files to be validated. One additional requirement is that ant needs to know where to find Jing, so I'm passing it -lib /usr/share/java on the command line.
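In the build file that amounts to something like this (the RNG filename is an assumption here; the classname is the one Jing ships with):

  <taskdef name="jing" classname="com.thaiopensource.relaxng.util.JingTask"/>
  <jing rngfile="../db/data/rng/london_all.rng">
    <fileset dir="../db/data" includes="**/*.xml"/>
  </jing>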
Schematron validation
schematron.com provides an open-source library called (currently) ant-schematron-2010-04-14.jar, which can be used in the same way as Jing; you create a <taskdef> giving it the classname and classpath (I point directly at the jar), then invoke the task with <schematron schema="../db/data/rng/london_all.sch" failonerror="false" queryLanguageBinding="xslt2">, again passing filesets.
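So, roughly (the classname is the one given in the library's documentation, and the jar path is wherever you've put it):

  <taskdef name="schematron"
           classname="com.schematron.ant.SchematronTask"
           classpath="utilities/ant-schematron-2010-04-14.jar"/>
  <schematron schema="../db/data/rng/london_all.sch"
              failonerror="false" queryLanguageBinding="xslt2">
    <fileset dir="../db/data" includes="**/*.xml"/>
  </schematron>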
Validation of embedded Schematron inside the RNG
schematron.com provides XSLT tools to extract the Schematron and convert it to a full Schematron file, so I'm using their ExtractSchFromRNG-2.xsl with Saxon to generate another Schematron file, then validating our tree against that.
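The extraction step can be run from ant's <xslt> task with Saxon as the factory, along these lines (the paths and the Saxon jar name are illustrative):

  <xslt in="../db/data/rng/london_all.rng"
        out="${temp.dir}/london_all_embedded.sch"
        style="utilities/ExtractSchFromRNG-2.xsl">
    <classpath location="utilities/saxon9he.jar"/>
    <factory name="net.sf.saxon.TransformerFactoryImpl"/>
  </xslt>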
Our regular diagnostics process is now quite sophisticated, and that's also running and producing an archived report in HTML format.
One minor improvement over the TEI setup is that I can store the log parse rules file directly in the repo, meaning that a fix to it is automatically inherited at the next checkout/job run. Right now I'm not doing anything useful in that file, but I'm sure as we continue to enhance our Schematron, especially with regard to bibls, we will need to suppress some specific messages.
Spent some time with JT refining the work he's done on pulling in bibls of our own documents, and also discussed more significant changes to the way bibls are rendered and popups work.
Created a job on the new build server in preparation for automating a bunch of stuff. Right now it's only running diagnostics.xsl, but it will eventually do a lot of stuff. I had to rewrite a chunk of diagnostics.xsl, because it was done in XSLT3 with maps, and we only have SaxonHE on the server, of course. It now runs under XSLT2, albeit a bit slower.
JT now has full power over the dev and site collections in the webapp, as well as full rw on the svn tree. All pending changes have been ported to live, and there's one remaining bug which resulted from them (order of names in popup bibl refs and cite-this-page popups, which are now forename surname instead of surname, forename). Leaving that as an exercise for JT. :-)
LINKS1 functionality has been enhanced through a change to the ODD and Schematron to require a ptr element child of linkGrp which points to a note element explaining the nature of the relationship. This allows better rendering in the GUI, which I've implemented on dev, and will port to live once the notes have all been added to LINKS1.
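The constraint itself is simple; the shape of it is roughly this (not the actual rule text, and the real one also checks that the target resolves to a note):

  <sch:rule context="tei:linkGrp">
    <sch:assert test="tei:ptr[@target]">
      A linkGrp must contain a ptr pointing at a note which explains the
      nature of the relationship.
    </sch:assert>
  </sch:rule>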
There were some old controller paths ending in _index.htm which were mapped from the old version of the application, but are now getting in the way (JT has a new file called praxis_index that doesn't work). I've now removed those obsolete redirects.
Nested orgs were not being rendered as individual pages because of the limited URL-matcher in the controller (they have ids that look like HCMC1_1). I've fixed that in dev and live, and I don't think there are any side-effects.
Added more functionality to the demo (parallel zone outlines on both screens). The code is now pretty messy and ready for a rewrite; it's definitely just a pilot. The thing works, but only because of lots of try/catch things. Also tested the PPTX output from LibreOffice on the Windows VM, but it's horrible; that's not really an option. If we must use other people's hardware, we'll need to rework everything into PDFs that go point by point.
Implemented handling for the situation where orgs are nested inside larger orgs, and for the use of person/@corresp to point to a person in the personography from an org/listPerson. The processing goes like this: any org you ask for (on its own page, or through AJAX) will be processed with its own heading and note elements, along with a list of all its descendant person elements (from nested sub-orgs); however, the suborgs will not be directly referenced unless they are explicitly mentioned and linked in the note. Tested in dev and ported to live.
Tested the presentations on the projector, with my new laptop (unknown quantity, needs adaptors, has hidpi); tweaked a couple of things, but basically it worked OK.
JT now has upload privileges for XQL and XSLT into the eXist webapp /dev/ collection, and commit privileges in svn /db/dev/.
JT has been investigating the popup behaviour for topics and other types of article. As a result, we made changes to most of the XQL files as well as general.xsl to get the desired behaviour (links to topics behave like links to locations and generate popups, irrespective of whether they have abstracts or not; other documents link to popups only if they have abstracts). Tested on dev and ported to live (which caused an error because of the special redirect we have in place in agas_ol3.xql on dev, due to that URL getting out into the wild).
Finished merging KMF's material into the map presentation, and then revised and tested it. Completed my own presentation on interchange, and in the process found and fixed a couple of bugs in our XML rendering (we have two distinct stylesheets doing very similar things; some of that work must be modularized).
Wrote most of the presentation with GN, and discussed with KMF who's provided lots more material we need to include. Worked on my own presentation too.
Did a short workshop on regexes for MoEML and other folks; the materials are in the MoEML repo. Also went to SB's DHSI class for a bit to answer Qs on Oxygen.
The side-by-side thing is working now, and we have more layers to show. The remaining thing is the display of source and responsibility information for the apparatus entries, but I have a good idea how to make that work.
Got the map setup working properly, with nice functionality outlining zones of interest. Also created a lot of new graphics for the interchange presentation, which still isn't really finished.
Planned a simple 10-stage outline/worksheet thing for tomorrow's regex workshop.
For the map presentation, we need a demo of side-by-side rendering. I've got the basics working:
- A framework HTML file which presents two linked map renderings.
- Our map rendering on the left, and other maps on the right selected by a drop-down selector.
- Some basic CSS.
- XSLT to render JS objects from the zone elements in the apparatus file (sketched below).
- An ant build script to build the project and push changes to the hcmc web space.
The next stage is to implement zones on the map and zoom to them when specific sheets are selected on the right.
Things learned: still can't use PNGs in Zoomify layers; can't zoom out below the lowest zoom level.
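The XSLT that creates the JS objects is basically a text-output pass over the zones, along these lines (a sketch; the variable name and object shape are made up here):

  <xsl:stylesheet version="2.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:tei="http://www.tei-c.org/ns/1.0">
    <xsl:output method="text"/>
    <xsl:template match="/">
      <xsl:text>var zones = [&#10;</xsl:text>
      <xsl:for-each select="//tei:zone[@points]">
        <xsl:text>  {"id": "</xsl:text>
        <xsl:value-of select="@xml:id"/>
        <xsl:text>", "points": [</xsl:text>
        <!-- Each "x,y" token becomes a two-item JS array. -->
        <xsl:value-of select="string-join(
            (for $pt in tokenize(normalize-space(@points), '\s+')
             return concat('[', $pt, ']')), ', ')"/>
        <xsl:text>]}</xsl:text>
        <xsl:if test="position() ne last()">,</xsl:if>
        <xsl:text>&#10;</xsl:text>
      </xsl:for-each>
      <xsl:text>];&#10;</xsl:text>
    </xsl:template>
  </xsl:stylesheet>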
I now have the Praxis document working in Simple (lots of extra stuff to handle egXML and documentation elements not allowed outside the header). Made more progress on the presentation too.
As responses come to my bugs on Simple, we'll need to make small changes, but it seems to be basically working.
I've got Simple output pretty well done, except for some obvious bugs in Simple itself (e.g. listPerson is allowed but not person). When those are resolved, there'll be a few more workarounds, but we'll basically be there. This has gone a little faster than I thought it would.
GN and I worked out a basic process for building a test map which initially shows the cleaned-up Margary sheets, each as a separate layer, on top of the main map. I've generated the Zoomify tiles, and I'll start work on the test code tomorrow.
I've started writing a converter for TEI Simple output, as part of testing for TEI Simple, but also as part of the prep for my DH paper on Interchange. The interesting bits are likely to be the CSS conversions to simple:whatever @rendition values, but for that I can leverage some of the code I've already written for Mariage.
On our map, we need to mark polygons, points and lines. The TEI <zone> element is officially defined as marking an area ("defines any two-dimensional area within a surface element"), which is arguably a polygon only. I've been dealing with points by creating tiny one-pixel zones using the old attributes designed for rectangles (@ulx, @uly, @lrx, @lry), but we've been rather abusing the @points attribute intended for polygons to handle lines (paths), with a convention that if a list of points does not repeat its initial point, it's not closed, and is therefore a path. I tried to get that officially sanctioned and change the definition of <zone> and @points, but was not successful (https://sourceforge.net/p/tei/feature-requests/541/), and they've asked me to prepare a feature request for a <path> element instead, which I fear will not be successful (the feeling is that this crosses too far into SVG territory).
So we are now officially engaged in some mild tag-abuse. The suggested fix was that instead of a series of coordinates creating a line, we should have a one-pixel-wide shape instead; that would be functionally identical to a line, but would really be a zone and would therefore be OK. So I'm now providing this in the XML we serve up (the "Standard" and "Standalone" variants) by expanding the attributes on the fly, so that we're providing good TEI to anyone who wants it, but we're not making our lives needlessly complicated for the sake of purity.
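For the record, the on-the-fly expansion is along these lines (a simplified sketch: it offsets the return leg by one unit in y and repeats the first point to close the shape; the real code may differ in detail):

  <xsl:template match="tei:zone[@points]" mode="standalone">
    <xsl:copy>
      <xsl:copy-of select="@* except @points"/>
      <xsl:variable name="pts" select="tokenize(normalize-space(@points), '\s+')"/>
      <xsl:attribute name="points">
        <xsl:choose>
          <!-- Already closed (first point repeated at the end): leave it alone. -->
          <xsl:when test="$pts[1] = $pts[last()]">
            <xsl:value-of select="@points"/>
          </xsl:when>
          <!-- Open path: walk back along it one unit lower, then close. -->
          <xsl:otherwise>
            <xsl:value-of select="string-join(
              ($pts,
               for $i in reverse(1 to count($pts))
               return concat(substring-before($pts[$i], ','), ',',
                             number(substring-after($pts[$i], ',')) + 1),
               $pts[1]),
              ' ')"/>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:attribute>
      <xsl:apply-templates mode="standalone"/>
    </xsl:copy>
  </xsl:template>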
With JJ's approval, migrated the CSS changes for off-page links from dev to live.
A bug was caused by link-generating markup in titleStmt/title, so I've removed that from Stow (the only place it was), and added it to Schematron, as well as some other minor Schematron additions suggested by JT.
Investigating oddities in the rendering of the historical personography, I found a bunch of typos, some of which look like they might have resulted from automated markup processes. Wrote to TL to see if there's anything in the spreadsheet code that might account for them. Fixed a bunch, but there will be more.
JT found a bug: when you clicked on a name in the name list in the gazetteer, it retrieved all mentions of that place, not just mentions with the spelling you chose. That's now fixed, tested in dev and ported to live. Just a typo in XQuery.
With the team's approval, I've ported the print stylesheet stuff I wrote last night on dev to live. I've also implemented most of what needs to be done for internal bibl refs (ref @type=mol:bibl); remaining: no links for unpublished docs, and warning of that status, and probably some re-formatting of the date.
Updated my Windows 10 VM (painful) and tested the map with the Spartan browser. All OK as far as we can tell.
Fixed some nasty bugs in the current video handling for embedded YouTube videos. Also began work on standard ways to handle non-YouTube content, such as CBC, but that's not working properly yet.
JT finished his code last night, and we looked it over this morning and discussed future plans for useful XSLT over the summer, including the possibility of writing an Oxygen ant task to import contributor work from docx and convert it to a base TEI file for correction and enhancement. This would cover lots of useful ground: ant, XSLT, ant scenarios in Oxygen, word-processor document formats, and a lot of other meaty things.
Spent some time working with JT, who's beginning his XSLT learning process, to get a sort of diagnostics stylesheet started which will identify good candidates for location on the map.
Sunday night and today, replaced the old map with the new map in all contexts; changed the landing page for the map to a standard MoEML document; removed "experimental" from everywhere it appears; fixed the gazetteer generation code along with page and mdtList rendering code to remove links to the old map; and cleaned up a host of other stuff.
I've also put in a temporary conditional redirect (with an override I can use for my own testing) that sends the dev version of the map to the live site; the dev URL got into the wild, unfortunately, so for a while we'll have to do that.
Tasks arising out of meeting on Wednesday. The nextFreeId/location template is now live, and I've also added a status column to the A-Z index page.
...for the Agas map locations. This can be extended to real geos if necessary, but let's cross that bridge when we come to it. We still have to figure out how to render these elements on Agas.
I have this working now, and it's tied into the nextFreeId system. Waiting for comments before porting to live.
With input from TL, I'm creating a further stage after nextFreeId which will generate a location file from data entered into a form, based on a template. The form is set up and working, and I've spent some time experimenting with embedding the template as XML into an XHTML script tag, following documentation on the MDN site, but it seems like it will be difficult to treat this data both as XML for the purposes of injecting it and as text for the purposes of doing search-and-replace to insert the data. May have to resort to generating the file on the server and returning it to the user.
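The embedding itself is simple enough; basically this (the id and content are illustrative):

  <script id="locationTemplate" type="application/xml">
    <TEI xmlns="http://www.tei-c.org/ns/1.0">
      <!-- template content, with placeholder text to be swapped for the form values -->
    </TEI>
  </script>

The trouble is getting at that content as both a DOM fragment (for injection) and a plain string (for the search-and-replace), which is where it gets awkward.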
I now have a complete auto-markup process tuned and tweaked, and it also handles restoring (virtually all of) the lb and hi tags which are removed initially to facilitate name recognition. Very neat, and it's basically ready for testing to find out how accurate it is. I've built in some rudimentary timing too, so I'll run it tonight to see how it does.
Working around the problems with hi tags and with lbs in the middle of words is proving extremely tricky. I have a transformation now running over the weekend, and we'll see if it completes with useful results. Regex is now well over 600K.
One major problem we have with adapting the procedure used for MoM to MoEML is that in the Stow 1633, many names, but more frequently parts of names, have been tagged with <hi> (no attributes), to signify that they are blackletter in the original. This would disrupt our tagging capabilities, so this is what I propose:
- Identity transform (sketched below) which replaces opening no-att hi tags with → (right-pointing arrow followed by space), and closing tags for same with ← (space followed by left-pointing arrow).
- Named entity regex construction code includes the two arrow characters alongside spaces as delimiter in a character class for each regex fragment. This means they will not prevent matches (assuming they wrap at word-boundaries, which is the norm).
- Text with arrows is tagged by identity transform as planned.
- perl search-and-replace puts the hi tags back in place of the arrow characters.
Potential issues include the last phase, where we might get overlapping tags instead of clean nesting. We'll have to see if that happens; if so, the perl process might be able to fix it, or a subsequent processing step might.
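The arrow substitution itself is just an identity transform with one override, roughly (stylesheet boilerplate omitted):

  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Attribute-less hi elements become arrow delimiters the regex can see through. -->
  <xsl:template match="tei:hi[not(@*)]">
    <xsl:text>→ </xsl:text>
    <xsl:apply-templates/>
    <xsl:text> ←</xsl:text>
  </xsl:template>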
I've created XSLT which harvests all unique links to named entities in the existing codebase, and constructs a reference list, along with an almighty regex (267KB) which can be used for matching and tagging. This is based on the MoM work, but more sophisticated (handling space issues, long-s variants, and similar problems, automatically). The next stage is to test this on Stow 1633.
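The regex construction is conceptually simple, for all its size; something like this (a sketch only: the element selection and escaping are illustrative, and the real code also deals with long-s variants and spacing):

  <xsl:variable name="names"
    select="distinct-values(for $n in //tei:name[@ref] return normalize-space($n))"/>
  <!-- Escape regex metacharacters in each name, then join the lot with | . -->
  <xsl:variable name="bigRegex"
    select="concat('(',
              string-join(
                (for $n in $names
                 return replace($n, '([\.\^\$\?\*\+\(\)\[\]\{\}\|\\])', '\\$1')),
                '|'),
              ')')"/>

$bigRegex can then drive xsl:analyze-string over text nodes, wrapping each matching substring in the appropriate name tag.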
Basics are all in place; JT is working on the last fixes.
GN and I have come up with a working prototype for a simple XSLT approach to automated entity tagging, based on a known database (like a gazetteer). Essentially it involves constructing an enormous regex in memory; that seemed unlikely to work, but it proves very effective with 800 or so names. We'll scale up tomorrow to see how well it might work with MoM, but also for linking placenames in ISE texts to MoEML.
...but before that can happen, the msDesc ids will have to be changed to XXXX1-style short ids.
JT and I have collaborated on this. Still some cleanup to do, and I need to implement individual pages for these msDescs, but basically it's working.
New images from GN which can be tiled; all surface and zone elements created.
Gathered together some of the original source images (more processed versions still to come from GN), created JPG versions and began building the facsimile and crit app document we will use for examples to add to the paper, and to create the eventual website. Some changes to schemas to allow for new elements and features.
JT is working on integrating BM's map research bibliography as a TEI file, for which we're using msDesc and friends, so I've added the required elements to the schema. We also have new values for bibl/@type and ref/@type, bringing more messiness to valLists that were already highly questionable; these need to be revisited carefully at some point.
Ahead of tomorrow's map-a-thon, set up a Google doc with links to streets that are so far unidentified on Agas.
Fixed a bad category assignment; tweaked the XSLT that generates the conversion status report.
As planned, I've implemented support for the display of MultiPolygon geometries for locations on the map, and documented the way to encode such geometries in the XML file.
In accordance with our recent decision to stop favouring authors of blog posts by putting their names more prominently on those pages than those of other authors on other articles, I've now carried out these steps:
- All blog post xml files have been renamed e.g. BLOG1.xml. Redirects are in place for the old URLs.
- Author names have been removed from bylines.
- The Author column has been removed from the Blog Posts mdtList page.
- Links have been updated elsewhere.
- Documentation has been updated to reflect these changes.
- Some old files (drafts of published posts) have been removed from svn.
With JT, KMF and JJ, worked out some details of the msDesc structure, set up the basic transformation parsing the CSV data, and started on the output.
After OK from team.
My previous implementation of Google Maps involved a small map embedded in the page, along with a link that shelled out to the main Google Maps site with our KML as a URL parameter. This is no longer going to be supported by Google Maps, so I've rewritten it so that you can just expand the embedded map to take over the page, and shrink it again when you need to. Took a bit of figuring out how to make the viewport resize, recentre and reconfigure -- reminder to self:
Built GN's UI font into the system and rebuilt the map toolbar buttons with it; more elegant, I think. Also added a keyboard zoom feature (not working on Chrome), some better accessibility support for keyboards, and fixed a couple of bugs; then ported the results to live.
After that, I worked on a mouseover-indication feature which is highly speculative, but sort of works OK.
This uses canvas.toDataURL() and the anchor element's @download attribute, and works well on the browsers that support the latter -- which doesn't include IE or Safari yet. Oh what a terrible shame, eh. Also added a button to clear the canvas features; I had the function already but didn't have a button for it.
Did the intro/training and location mapping session, and we polished off quite a few of the remaining streets. Some interesting new problems emerged too.
Updated all the documentation, published it, and created a handout for our session tomorrow.
Met with JJ and KMF to plan our approach for Wednesday:
- We'll start with a basic "how to draw a shape" presentation, then go through how to put the XML into the file; then we'll provide each person or pair with a subset of the data from the Streets page to convert from the old location info (star on map tile) via the A-Z and BHO info to a shape on the new map.
- Periodically, we'll refresh the JSON so we can see how we're doing.
- When that's done, we assign each person a quarter of the map to look at, to see if any streets that remain unidentified can be identified.
- Parishes should be assigned to JT.
- Wards will be kept aside for the moment.
- Next, we go through all locations which have geo info but no representation on the map, and attempt to add these (low-hanging fruit).
- Finally, if there's time, we try to identify any remaining locations we can on the map (churches, halls, wharves, other obviously distinct features).
After extensive testing, it looks like recent updates to Open Layers made the FeatureOverlay object a bit flaky. Since I don't really need it, I've rewritten all the highlighting code to get rid of it. This took a few hours and some serious testing, but the result is a bit cleaner, simpler and more maintainable.
This apparently-stable version of the map has now been ported to the live site as the "experimental" Agas Map, so I can keep working and breaking the dev version while people use the live one.
I've finished preparing the map for a live beta, so people can test and work with a stable version while I hack at the dev one. I've set it up so that it integrates nicely with the old map, so that where a location has a new map shape, that's what we use, but where there isn't one but there is data for the old map, that gets used instead. Waiting on approval to move that over. In the process I found and fixed one more bug caused by an API change in Open Layers.
I've written half of a new search system which allows fine filtering based on the document taxonomy. I have the nesting/expansion and checkbox status stuff all working on the search page, and the right cats are being submitted to the search; the next stage is reconstituting the category choices in the page when it's reloaded, which is not easy. I'm also still using the old searching code, which I'm unhappy with (it's slow). I had one shot at optimizing it through using eval, which didn't help much, but now I'm thinking that if I re-order the predicates so that the text search comes first, and then the document filtering is done on a much smaller set of documents, the indexes should work better.