We now have copies of the server software, the desktop application, and the desktop interoperability extension, along with one license for the desktop and one for the server, courtesy of RS, and we're testing installation and connecting the pieces to each other. It's not simple; it will take a while to figure the software out.
I've consolidated the latest changes into the live db and tested very briefly. I've also considered, and for now rejected, the idea of creating something called a "property" which amalgamates lots sharing a title; this could be done automatically at some later stage anyway. I've written some simple maintenance scripts, including one which copies the live database content into the dev db (sketched below), so I'm now in good shape for dev and testing.
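For reference, a minimal sketch of the live-to-dev copy script, assuming mysqldump/mysql command-line access; the database names and credentials file here are hypothetical stand-ins for the real configuration:

```python
#!/usr/bin/env python3
"""Copy the live properties db into the dev db.
A sketch: LIVE_DB, DEV_DB, and the credentials file are assumptions."""
import subprocess

LIVE_DB = "props_live"   # hypothetical name of the live database
DEV_DB = "props_dev"     # hypothetical name of the dev database
DEFAULTS = "--defaults-extra-file=/home/hcmc/.my.cnf"  # assumed credentials file

def copy_live_to_dev():
    # Dump the live db (structure and data) in a consistent snapshot.
    dump = subprocess.run(
        ["mysqldump", DEFAULTS, "--single-transaction", LIVE_DB],
        check=True, capture_output=True, text=True,
    )
    # Replay the dump into the dev db.
    subprocess.run(
        ["mysql", DEFAULTS, DEV_DB],
        input=dump.stdout, check=True, text=True,
    )

if __name__ == "__main__":
    copy_live_to_dev()
```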
It's increasingly clear that titles-to-lots needs to be a one-to-many relationship: more than six hundred titles have additional lots mentioned in their "Doc lots" field. I've been working out how this might be managed, using the dev version of the new db. I have a script which makes several changes to the db structure (see the sketch after this list):
- Creates a new titles_to_properties table and populates it with records for all the existing title/property one-to-one relationships.
- Adds a new prp_desc field to the props table. This is necessary because it's not practical to use the props_abbr view in a one-to-many relationship; instead, we can dispense with that view and replace it with a description field.
- Populates the new field with values calculated from the existing block and lot fields (this may need to be more detailed, drawing on other fields, since we'll eventually be looking at non-Vancouver properties, but it's easy to expand).
- Adds BEFORE INSERT and BEFORE UPDATE triggers which recalculate this field whenever a row is inserted or updated.
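A sketch of what the script does, using pymysql. Only titles_to_properties, prp_desc, props, and the block/lot fields come from the notes above; the column names (tit_id, tit_prop_id, prp_block, prp_lot), the connection details, and the exact description format are assumptions:

```python
"""Restructuring script sketch: column names and connection
details are hypothetical; table/field names from the notes are real."""
import pymysql

conn = pymysql.connect(host="localhost", user="properties_admin",
                       password="...", database="props_dev")

DDL = [
    # One-to-many join table between titles and properties.
    """CREATE TABLE titles_to_properties (
           ttp_title_id INT NOT NULL,
           ttp_prop_id  INT NOT NULL,
           PRIMARY KEY (ttp_title_id, ttp_prop_id)
       )""",
    # Seed it from the existing one-to-one title/property links
    # (assumes the old link lives in a titles.tit_prop_id column).
    """INSERT INTO titles_to_properties (ttp_title_id, ttp_prop_id)
       SELECT tit_id, tit_prop_id FROM titles
       WHERE tit_prop_id IS NOT NULL""",
    # Description field replacing the props_abbr view.
    "ALTER TABLE props ADD COLUMN prp_desc VARCHAR(255)",
    # Populate it from the existing block and lot fields.
    """UPDATE props
       SET prp_desc = CONCAT('Block ', prp_block, ', Lot ', prp_lot)""",
    # Keep it current on insert and update.
    """CREATE TRIGGER props_bi BEFORE INSERT ON props FOR EACH ROW
       SET NEW.prp_desc = CONCAT('Block ', NEW.prp_block,
                                 ', Lot ', NEW.prp_lot)""",
    """CREATE TRIGGER props_bu BEFORE UPDATE ON props FOR EACH ROW
       SET NEW.prp_desc = CONCAT('Block ', NEW.prp_block,
                                 ', Lot ', NEW.prp_lot)""",
]

with conn.cursor() as cur:
    for stmt in DDL:
        cur.execute(stmt)
conn.commit()
```

Since the triggers compute prp_desc on every write, the web gui never has to maintain the description itself, which is what lets us dispense with the props_abbr view.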
I've also updated the local_classes.php file to take account of this. Quick testing suggests it's working well (I haven't tried an insert yet, though).
Once this is working in the live db, someone will have to go through and update all those hundreds of existing records, which will be very time-consuming. In most cases, new property items will have to be added to the props table.
Urgent requirements are arising every two minutes on this project at the moment, so we've spent much of the day researching and organizing how we might get ArcGIS running and integrated with the rest of the project, providing half-finished, untested tools to the RAs, and working out backup strategies. We're waiting on netlinks being assigned to all the RAs before we can get the project workspace set up and working; meanwhile, I've created a preliminary document laying out some basic rules for file and folder naming, pending expansion to cover uploading files and so on.
Under pressure to show stuff to the new hires, I rustled up dev and live versions of the new db, and made a couple of tweaks to the Windows ImageMagick script. It turns out the parameter error I'm seeing on some machines results from ImageMagick not being installed: the script then falls through to the "convert" command the system uses for something else entirely. Must trap that error better.
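The trap will probably look something like the guard below (a sketch in Python rather than the script's own language; relying on "convert" being on the PATH and on the -version output as a fingerprint are the assumptions):

```python
"""Guard against the system's unrelated 'convert' command being
picked up when ImageMagick isn't installed. A sketch."""
import shutil
import subprocess
import sys

def check_imagemagick():
    exe = shutil.which("convert")
    if exe is None:
        sys.exit("ImageMagick not found: no 'convert' on the PATH.")
    # ImageMagick's convert identifies itself in its -version output;
    # the Windows system utility of the same name does not.
    result = subprocess.run([exe, "-version"], capture_output=True, text=True)
    if "ImageMagick" not in result.stdout:
        sys.exit(f"{exe!r} is not ImageMagick; install it before running this script.")

check_imagemagick()
```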
Also discussed data collection and processing tools with the new hires, and did some research into ArcGIS, which the GIS folks want to use.
I have some XSLT half-working to produce the property structures we want to use for our queries. In the process, I've fixed (I think and hope) a little bug in oddbyexample, and extended the ODD file a lot. I'm beginning to encounter all sorts of oddities in the db source data, including annoying notes in code fields, as well as what look like genuine inconsistencies in the data (lots numbered "B", which may or may not be a typo for "8", and so on). All this will help clarify questions for the land title folks next week.
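To collect those oddities systematically, something like this scan over the XML dump would do (a sketch: the dump filename and the lot element name are assumptions, since the real dump's element names may differ):

```python
"""Flag suspicious lot values in the XML dump of the db.
A sketch: 'properties_dump.xml' and the <lot> element are assumed."""
from lxml import etree

doc = etree.parse("properties_dump.xml")
for lot in doc.iter("lot"):
    value = (lot.text or "").strip()
    if not value.isdigit():
        # e.g. a lot numbered "B", which may be a typo for "8"
        print(f"Suspicious lot value: {value!r}")
```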
Having stripped down the original Properties db, I've now rebuilt the web interface for it (dev only), and tested it on our HCMC servers (db server and web cluster). I have an XML dump of the db, and I'm going to start building XSLT to generate data conforming to the schema I created last week; that will be the basis of research enquiries, because it will have transformed the title-oriented relational data into property-oriented XML documents.
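The transformation harness itself should be small; a sketch using lxml, with the stylesheet and file names as placeholders:

```python
"""Run the title-oriented-to-property-oriented transformation.
A sketch: all three filenames are placeholders."""
from lxml import etree

xslt = etree.XSLT(etree.parse("titles_to_properties.xsl"))
source = etree.parse("properties_dump.xml")
result = xslt(source)
# Serialize according to the stylesheet's xsl:output settings.
result.write_output("properties.xml")
```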
Talked with Beth after sending her a draft of objectives for the first six months. She knows of Japanese directory sources, but isn't sure about English ones, so she will contact JSR to see if he has a good idea; if not, she'll have the students spend the first week or so canvassing the possibilities, and then we'll settle on a set.
She confirmed that pre-war Japanese language is tricky, particularly names, so part of the project this summer will be figuring out how advanced one's Japanese needs to be. We may end up assigning conventional Japanese to one person and the more complex material to another (e.g. names, or data in earlier sources vs. data in later sources).
Some of the data exists only in analog form, so her student and/or one of the UVic ones will have to spend some time generating image files from those source artefacts.
She also mentioned the Kobayashi Genealogy, the raw CSV tables of a defunct database, as something we might want to look at to see if it provides enough useful information to justify processing it into an XML data structure for use in the project.
She'll update the milestones document and send it back to me.
Installed MariaDB on my local machine, anticipating that it will be set up on the new server, and worked with a dump of the properties db. I stripped it down, removing the views (except for props_abbr, a convenience view for the web gui), and did a little testing. One problem took a while to figure out: the dump also included the user which had defined the views (hcmc on lettuce), so that definer was pulled back into the db and prevented dumping of the stripped-down version; I had to drop the view and re-create it with the properties_admin user as the definer to get around it.
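The fix amounted to something like the following (a sketch: the connection details are assumptions, and the SELECT body stands in for the real props_abbr definition):

```python
"""Re-create props_abbr with a local definer so the stripped-down
db can be dumped. A sketch: the view body is a placeholder."""
import pymysql

conn = pymysql.connect(host="localhost", user="properties_admin",
                       password="...", database="props_dev")
with conn.cursor() as cur:
    cur.execute("DROP VIEW IF EXISTS props_abbr")
    # Placeholder SELECT; the real view's column list differs.
    cur.execute("""
        CREATE DEFINER = 'properties_admin'@'localhost'
        SQL SECURITY DEFINER VIEW props_abbr AS
        SELECT prp_id, CONCAT(prp_block, '/', prp_lot) AS abbr
        FROM props
    """)
conn.commit()
```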
One annoyance: installing MariaDB breaks MySQL Workbench due to dependency issues. I found a workaround; installing per the instructions on this thread generates a lot of errors, but the app works.