Seems to work perfectly, so I can now get dev in sync with live instantly for working on fixes.
After implementing the new triggers on the properties table and running SQL to update all the descriptions for owners and properties, I now have a "live" version of the Maple Ridge database running.
I've left the original description field data in the properties table as "OLD Description", but once you confirm that there's nothing important in there, I'll remove that field; anything important should presumably be transferred to the notes field. The RAs will now start entering new data, and will let me know if they see any problems.
I've added a fetching green colour to distinguish this one from the development version, and added a warning notice to the dev version so that we don't get confused. I'll keep the dev version around for testing purposes. Next I need to write the script which copies the live data to the dev db, always a rather tricky thing.
This is basically what I'm proposing to the RAs as an algorithm for generating the desc field in the properties table:
SELECT concat(
    IF (`prp_township`  != '', concat('T:',  LPAD(`prp_township`, 5, '0'), ' '), ''),
    IF (`prp_district`  != '', concat('DL:', LPAD(`prp_district`, 5, '0'), ' '), ''),
    IF (`prp_section`   != '', concat('S:',  LPAD(`prp_section`,  5, '0'), ' '), ''),
    IF (`prp_quadrant`  != '', concat('Q:',  `prp_quadrant`, ' '), ''),
    IF (`prp_plan`      != '', concat('PL:', LPAD(`prp_plan`,   5, '0'), ' '), ''),
    IF (`prp_sketch`    != '', concat('SK:', LPAD(`prp_sketch`, 5, '0'), ' '), ''),
    IF (`prp_lot`       != '', concat('L:',  LPAD(`prp_lot`,    5, '0'), ' '), ''),
    IF (`prp_parcel`    != '', concat('PC:', LPAD(`prp_parcel`, 5, '0'), ' '), ''),
    IF (`prp_acreage`   != '', concat('A:',  `prp_acreage`, ' '), ''),
    IF (`prp_exception` != '', 'EX', ''),
    ' (#', `prp_property_id`, ')'
) AS test
FROM `props` WHERE 1 ORDER BY test LIMIT 60
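As a sanity check, here's the same formatting logic as a Python sketch (the dict keys mirror the column names; Python's `rjust` plays the role of `LPAD`). This is an illustration of the algorithm, not code that runs against the db:

```python
# Output order and zero-padding follow the SQL above; quadrant and acreage are unpadded.
FIELDS = [
    ("T:",  "prp_township", True),  ("DL:", "prp_district", True),
    ("S:",  "prp_section",  True),  ("Q:",  "prp_quadrant", False),
    ("PL:", "prp_plan",     True),  ("SK:", "prp_sketch",   True),
    ("L:",  "prp_lot",      True),  ("PC:", "prp_parcel",   True),
    ("A:",  "prp_acreage",  False),
]

def make_desc(prp):
    """Build the desc string: prefix each non-empty field, zero-padding
    identifiers to width 5 (the LPAD(field, 5, '0') equivalent)."""
    parts = []
    for prefix, key, pad in FIELDS:
        val = str(prp.get(key, "") or "")
        if val:
            parts.append(prefix + (val.rjust(5, "0") if pad else val))
    if str(prp.get("prp_exception", "") or ""):
        parts.append("EX")
    return " ".join(parts) + " (#%s)" % prp["prp_property_id"]

# e.g. make_desc({"prp_township": "12", "prp_lot": "3", "prp_property_id": 45})
# gives "T:00012 L:00003 (#45)"
```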
Waiting for their feedback now.
I've changed the triggers that create the drop-down list entries for owners to add in the city component, at the request of the RAs. Those triggers had become long and fragile due to the combinatorial explosion of possible full and empty fields, so I've simplified them considerably, perhaps at the expense of some elegance; you should see all the available data, but there might be trailing commas or similar oddities if you hit upon a particular combination of full and empty fields that I haven't allowed for.
The process of doing this pushed me into creating a database-level function, for stripping repeated spaces, which I've done like this (nicked from Stack Overflow -- set delimiter to //):
CREATE FUNCTION `clean_spaces`(str VARCHAR(1024)) RETURNS varchar(1024) CHARSET utf8 DETERMINISTIC
BEGIN
  WHILE instr(str, '  ') > 0 DO
    SET str := replace(str, '  ', ' ');
  END WHILE;
  RETURN trim(str);
END//
I can now write triggers that join fields together separated by spaces without having to check their length first, then just trim all the extra spaces, which should help keep things simple. The existing owner descriptions remain unchanged. To update them, if they need to, the RAs can simply edit and save the record without changing anything; that will re-generate the representation that appears in the drop-down.
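For clarity, here's the same space-collapsing logic as a Python sketch, together with the join-then-clean pattern the triggers now use (the field values are invented examples):

```python
def clean_spaces(s):
    # Mirror of the SQL function: collapse runs of spaces, then trim the ends.
    while "  " in s:
        s = s.replace("  ", " ")
    return s.strip()

# Trigger pattern: join possibly-empty fields with spaces, then clean up afterwards,
# instead of checking each field's length before concatenating.
fields = ["Tanaka", "", "Ichiro", "", "Dunbar Street"]
desc = clean_spaces(" ".join(fields))  # "Tanaka Ichiro Dunbar Street"
```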
I'm now going to start figuring out how best to patch together all the bits and pieces for property descriptions.
With JG, reviewed printed spreadsheet documents and dotty maps of Steveston properties.
The aggregated person list at the start (roughly 1,250 records):
- includes records for people in Steveston in the 1930s which don't appear in any of the 1930s zone-colour lists (possibly people without a fixed address?)
- none of the versions of the movement.xls file we received reconcile exactly with that printout; the closest appears to be movement3.
TO DO : decide if we want to use this list (not sure we do) and, if so, create an xls file that duplicates what's printed.
Many records in the zone-colour tables have no street address whatsoever, but do have dots on the map, so we should be able to infer a street name and a block range of possible house numbers for each such record.
TO DO : decide exactly how to implement this (especially how to distinguish address data as found from address data we add)
There are a couple of instances where more than one person (or married couple) is associated with a given record/dot. One is a list of tenants in an apartment in zone2Red, and one is a list of records consisting only of names in zone4Green.
TO DO : decide how to normalize these (add extra records to the table? add an "other people" text field?)
A number of institutional and business names are listed in the surname field.
TO DO : decide how to distinguish business and personal names
Do we include the 'origin' and 'notes' fields? The latter appears to be largely a record of which camp each person was sent to.
Added Previous Title Notes and Subsequent Title Notes columns as planned; at the request of RS, added new Parcel and Acreage fields. Still waiting on a good opportunity to add the property linking table and related fields.
Out of discussions this morning, I'll make the following changes to the SQL db:
- Change the Traces field to be Previous Title Notes.
- Add a new Subsequent Title Notes field alongside it.
- Add a new table linking two properties, with previous and subsequent keys.
- Add AdaptiveDB one-to-many fields for Previous Property and Subsequent Property to the Properties table, both using the same linking table but inverted.
I'll try to do this first thing in the morning before they start work.
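To make the last two points concrete, here's a minimal sketch of the linking-table idea using Python's built-in sqlite3 (the table and column names are invented for illustration, not the actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE properties (prp_id INTEGER PRIMARY KEY, prp_desc TEXT);
    CREATE TABLE property_links (           -- one row per previous -> subsequent pair
        lnk_previous_prp   INTEGER REFERENCES properties(prp_id),
        lnk_subsequent_prp INTEGER REFERENCES properties(prp_id)
    );
""")

con.executemany("INSERT INTO properties VALUES (?, ?)",
                [(1, "Lot 5, Plan 100"), (2, "Lot 5A, Plan 200"), (3, "Lot 5B, Plan 200")])
# Lot 5 was replaced by 5A and 5B under a new plan: two links share the same previous property.
con.executemany("INSERT INTO property_links VALUES (?, ?)", [(1, 2), (1, 3)])

# "Subsequent Property" for property 1 reads the table one way round...
subsequent = [r[0] for r in con.execute(
    "SELECT lnk_subsequent_prp FROM property_links "
    "WHERE lnk_previous_prp = 1 ORDER BY lnk_subsequent_prp")]
# ...and "Previous Property" for property 2 reads the same table inverted.
previous = [r[0] for r in con.execute(
    "SELECT lnk_previous_prp FROM property_links WHERE lnk_subsequent_prp = 2")]
```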
For the spreadsheet generation, I've added a new column "TOUCHED_BY_CUSTODIAN", which has 1 for any property which has been on a title which has been bought or sold by the Custodian. I've also set the cust_gen and noncust_gen columns so that they always show a zero when the field is empty.
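The column logic can be sketched like this in Python (the dict keys and the `custodian_transaction` flag are illustrative stand-ins for however the real generation code identifies Custodian transactions):

```python
def touched_by_custodian(property_id, titles):
    """1 if the property appears on any title bought or sold by the Custodian, else 0."""
    return int(any(t["property_id"] == property_id and t["custodian_transaction"]
                   for t in titles))

def zero_if_empty(value):
    # cust_gen / noncust_gen rule: always show a zero when the field is empty.
    return 0 if value in ("", None) else value

titles = [{"property_id": 7, "custodian_transaction": True},
          {"property_id": 8, "custodian_transaction": False}]
```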
Finally, I've figured out how to create transparent PNG Zoomify layers in OpenLayers, implemented it, and added additional features to the new Read map: opacity sliders for the two overlay layers, and an overview control.
The Titles team had their first day working with the db today, and came back with some suggestions that we're now going to discuss in detail tomorrow. They also reported that the owner pre-insert trigger was failing, and when I tested, it was indeed; weird, because I thought I had it working. Anyway, I tested again and found a couple of bugs: I had a variable name wrong in a couple of places, and I was prefixing variables with @, which I shouldn't have been doing (as I understand it, @-prefixed names are MySQL session variables, while variables DECLAREd inside a trigger body are referenced without the prefix). For the record, this is what appears to be working:
DROP TRIGGER IF EXISTS `landscapes_mapridgedev`.`own_desc_insert`//
CREATE TRIGGER `landscapes_mapridgedev`.`own_desc_insert` BEFORE INSERT ON `landscapes_mapridgedev`.`owners`
FOR EACH ROW BEGIN
  DECLARE next_id INT;
  SET next_id = (SELECT `AUTO_INCREMENT` FROM `information_schema`.`TABLES`
                 WHERE `TABLE_SCHEMA`=DATABASE() AND `TABLE_NAME`='owners');
  SET NEW.`own_desc` = CASE
    WHEN (LENGTH(NEW.`own_surname`) > 0 AND LENGTH(NEW.`own_forenames`) > 0 AND LENGTH(NEW.`own_street_num`) > 0 AND LENGTH(NEW.`own_street`) > 0)
      THEN concat(NEW.`own_surname`,_utf8', ',NEW.`own_forenames`,_utf8' : ',NEW.`own_street_num`,_utf8', ',NEW.`own_street`,_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_surname`) > 0 AND LENGTH(NEW.`own_forenames`) > 0 AND LENGTH(NEW.`own_street`) > 0)
      THEN concat(NEW.`own_surname`,_utf8', ',NEW.`own_forenames`,_utf8' : ',NEW.`own_street`,_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_surname`) > 0 AND LENGTH(NEW.`own_forenames`) > 0)
      THEN concat(NEW.`own_surname`,_utf8', ',NEW.`own_forenames`,_utf8' : (no address)',_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_surname`) > 0)
      THEN concat(NEW.`own_surname`,_utf8' (no forenames or address)',_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_institution_name`) > 0 AND LENGTH(NEW.`own_street_num`) > 0 AND LENGTH(NEW.`own_street`) > 0)
      THEN concat(NEW.`own_institution_name`,_utf8' : ',NEW.`own_street_num`,_utf8', ',NEW.`own_street`,_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_institution_name`) > 0 AND LENGTH(NEW.`own_street`) > 0)
      THEN concat(NEW.`own_institution_name`,_utf8' : ',NEW.`own_street`,_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_institution_name`) > 0)
      THEN concat(NEW.`own_institution_name`,_utf8' : (no address)',_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_street_num`) > 0 AND LENGTH(NEW.`own_street`) > 0)
      THEN concat(_utf8' : (no name): ',NEW.`own_street_num`,_utf8', ',NEW.`own_street`,_utf8' (',next_id,_utf8')')
    WHEN (LENGTH(NEW.`own_street`) > 0)
      THEN concat(_utf8' : (no name or street number): ',NEW.`own_street`,_utf8' (',next_id,_utf8')')
    ELSE concat(_utf8'(no name or address)',_utf8' (',next_id,_utf8')')
  END;
END //
(After setting the delimiter to //.) We'll see if it works for those in the field tomorrow. Fingers crossed.
I also updated the GIS spreadsheet generation code so that it uses the column headers supplied by JSR, per his request.
Got three layers from SB: a base layer, a 1930 and a 1940. The latter two were supposed to be transparent PNGs, but although they were PNGs, they had no transparency; I edited them to remove the background and generated Zoomify tiles from all of them, using vips:
vips dzsave LOI_mapleridge_1930_trans.png 1930_png --layout zoomify --suffix .png
vips dzsave LOI_mapleridge_1940_trans.png 1940_png --layout zoomify --suffix .png
vips dzsave LOI_mapleridge_3_lg.jpg img --layout zoomify --suffix .jpg[Q=100]
Set about trying to get the OL code to handle the transparent layers, but hit a problem almost immediately; I've ended up posting a question explaining it here. In the meantime, I've generated non-transparent layers that all have the base layer as a background, and made a switchable version; this will have to do for the moment. I have the transparent tiles ready to go in case a solution emerges, but it may be that the Zoomify source just can't handle anything except JPGs. If so, I'll need to figure out how to create a regular XYZ layer, using vips to create the tiles and a regular ol.source.XYZ to create the layer.
I've set up and configured the new Land Titles database for Maple Ridge, using the latest AdaptiveDB code to provide both a read-only interface and the editing interface. The read-only interface is new, and it should provide a simpler way to search and find things when you don't want to risk editing or deleting something by accident. I've let the relevant team know all the details below, and I'm waiting for their confirmation that they can log in and see it.
I've coloured it slightly differently so it's clear you're not in the original Powell Street database. There are a couple of test records in there, which they can delete when they want to get started entering their own test data.
I've solved the problem we had last time around with the owner description for a newly-added owner having the id "0" instead of its real id in the drop-down list in title records (see the previous post). That's been bugging me for ages.
Some tables are no longer visible (ethnicity, locations) because their records are fixed for now and don't need editing.
There are a couple of other changes:
In the Titles table there is a new one-to-many field for "Extinguished properties", arising out of our discussions the other day. This is intended for a case where a new title and a new plan come into force, creating one or more new lots, and old lots are then "extinguished" (they don't exist any more). This relationship can also be edited in the Properties table. We hope through this to be able to track more easily when a property should no longer be processed in statistical analysis and similar contexts.
Another difference relates to the Property table. There's a visible field called "Description". In the old db, that field was invisible to editors, and was auto-populated with a generated string of text that (hopefully) uniquely and usefully identified the property when it appeared in a drop-down list in the Titles record editor. However, because we don't yet know how the properties will best be described in the Maple Ridge context, we can't yet write an algorithm to auto-populate the field; that means that editors will be responsible for entering that text manually for the moment. They will experiment with what works best in that field, and once they're happy, we can turn it into an algorithm and hide that field.
The test data has high id numbers (in the thousands, rather than starting from one or two). The reason is that I set all of the tables in the new database to start their record ids incrementing from the highest values in the old database. That should mean (assuming we don't go back and add a lot of new records to the old db) that ids for owners, titles etc. will remain unique across the two databases. That will help with processing down the line. Even if we do go back and add new records to the old db, if we do it carefully (by presetting the auto-increments there before we start adding things) we can ensure they allocate ids in different ranges.
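A small illustration of the presetting idea, using Python's sqlite3 rather than MySQL (in MySQL the equivalent is ALTER TABLE ... AUTO_INCREMENT = n; the value 5000 is just an example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE owners (own_id INTEGER PRIMARY KEY AUTOINCREMENT, own_surname TEXT)")

# Preset the counter so new ids start above the old database's highest id.
con.execute("INSERT INTO sqlite_sequence (name, seq) VALUES ('owners', 5000)")

con.execute("INSERT INTO owners (own_surname) VALUES ('Test')")
new_id = con.execute("SELECT own_id FROM owners").fetchone()[0]  # 5001: ids won't collide
```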
This is a test database, and we expect to be hacking away at it and changing things in the initial stages, so data in this db should not be considered permanent. At some stage, when we're happy, I'll create the "live" database for the real data. At that point it will be possible to carefully port over some existing data from the dev database, in case there's worthwhile stuff they want to keep; the best way to do that would be for them to delete everything they DON'T want to migrate, then let me know, and I'll copy over all the remaining data.