I'm trying to find a straightforward way to calculate the areas of all the polygons in the lot maps created in QGIS by AC, but so far I haven't found one. I've been trying OpenLayers with the GeoJSON output created by ogr2ogr, but for some reason the JS won't read the JSON files.
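If OpenLayers keeps refusing the files, one fallback would be to compute areas directly from the GeoJSON coordinates with the shoelace formula. This is only a sketch: it assumes the data is in a projected CRS measured in metres (so area/10000 gives hectares), and the sample ring below is illustrative, not real lot data.

```python
import json

def ring_area(ring):
    """Shoelace formula: unsigned area of a closed coordinate ring."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def polygon_area(geom):
    """Area of a GeoJSON Polygon: outer ring minus any interior holes."""
    rings = geom["coordinates"]
    return ring_area(rings[0]) - sum(ring_area(r) for r in rings[1:])

# Illustrative 100m x 100m square, i.e. one hectare.
sample = json.loads(
    '{"type": "Polygon", "coordinates": '
    '[[[0,0],[100,0],[100,100],[0,100],[0,0]]]}'
)
print(polygon_area(sample) / 10000)  # hectares -> 1.0
```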
When CN met with me recently, she left a set of permissions forms which had not yet been uploaded to the server, so that I could do that. Today I scanned and uploaded them, and let her know.
She's the new Oral History person. We went through the shambles that is the current collection of ill-assorted and ill-named files on the server; she will take control of them and do some reorganization, de-duping and renaming. I walked her through the process with FileZilla, although she doesn't yet have a netlink id, so she can't do anything for the moment.
I've written three tiny scripts to generate usable data in GML, GeoJSON and KML from the binary ESRI shapefiles created with QGIS, and I've added all of those to the repo. I've been investigating the simplest way to generate usable area calculations, and it seems to be this:
ogrinfo -dialect SQLite -sql 'SELECT ST_Area(geometry)/10000 FROM Plan47534' Plan47534.shp

which generates text output like this:
INFO: Open of `Plan47534.shp' using driver `ESRI Shapefile' successful.
Layer name: SELECT
Geometry: None
Feature Count: 2
Layer SRS WKT:
(unknown)
ST_Area(geometry)/10000: Real (0.0)
OGRFeature(SELECT):0
  ST_Area(geometry)/10000 (Real) = 0.597841877829159
OGRFeature(SELECT):1
  ST_Area(geometry)/10000 (Real) = 0.7206513230636
This should be parsable with XSLT (for instance), so an ant task that generates all these outputs, hooks them up with their lot ids (the ogr:UID elements in the GML files), and then generates a stack of SQL statements to update the databases would seem straightforward, if JSR wants us to go that way tomorrow.
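As an alternative sketch to the XSLT/ant route, the parse-and-generate-SQL step could look something like this in Python. The table and column names (lots, area_ha, uid) are placeholders, not the real schema, and the uid list here stands in for the ogr:UID values that would be read from the GML files.

```python
import re

# The ogrinfo output shown above, trimmed to the feature records.
OGRINFO_OUTPUT = """\
OGRFeature(SELECT):0
  ST_Area(geometry)/10000 (Real) = 0.597841877829159
OGRFeature(SELECT):1
  ST_Area(geometry)/10000 (Real) = 0.7206513230636
"""

def parse_areas(text):
    """Return the list of areas (hectares) in feature order."""
    return [float(m) for m in re.findall(r"\(Real\) = ([0-9.]+)", text)]

def make_updates(areas, uids, table="lots", column="area_ha"):
    """Pair each area with its lot UID and emit one UPDATE per lot."""
    return [
        f"UPDATE {table} SET {column} = {area} WHERE uid = '{uid}';"
        for area, uid in zip(areas, uids)
    ]

# The UIDs below are illustrative placeholders.
for stmt in make_updates(parse_areas(OGRINFO_OUTPUT), ["47534-1", "47534-2"]):
    print(stmt)
```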
Finished the redraft of the article for submission to another journal, and submitted it.
JAEH reviewers reckon, quite rightly, that my article is DH and not ethnic history, so I'm now rewriting it for another CfP that just came out.
Received a mailed thumbdrive from RS containing her oral history work. Made a local copy, which will be backed up to Rutabaga as a matter of course, and pushed a copy up to nfs in loi/oralhistory/interviews/fromRebeca2017.
AC took GN and me step by step through her working process, to ensure that we understand everything. We determined afterwards that the QGIS project files were in fact being stored outside the repo, so we copied the Maple Ridge ones into the repo; they don't work at the moment because they're full of hard-coded and broken relative paths, but they're XML so that will be easy to fix. In the afternoon I reviewed and commented on the protocol document.
Did some archaeology on the apparently missing titles from Powell St and reported as follows:
My backup of the database from September 2014 (when I think we were just creating it) contains a total of 3489 titles, including those for B43 L2. The next backup, from November, has only 1513 titles, so nearly 2,000 were deleted. This was planned. These are my notes from our discussions and our actions at the time:

-----

October 3: Met with JSR and SF to discuss refining data in the LTD. First, we create a new duplicate of the existing db. SF will generate a list of known-good titles (in that they've been fully edited using the final protocols). I'll then generate lists of titles that don't match that set, which will be candidates for deletion; she will check those. Then we delete those. Then we generate lists of now-unlinked people (owners and sellers), other documents, and legal descriptions, which again are assessed as candidates for deletion.

October 14: SF and I have been working on generating and checking lists of records which we believe can be deleted from the db. I've made a landscapes_backup db and cloned the current content into it before we start deleting; it looks like we'll be removing over 2,000 title records, but we're still doing some checking; then we'll remove associated unlinked items.

October 17: Did a number of planned deletions, and then some additional work identifying now-unlinked owners; there are 2713 which could be deleted. Many of these are additions by the recent research team whose titles were then mistakenly linked to earlier identical or similar entries. SF is now analysing this situation, and we will eventually prune all the unwanted owners from the system. Then comes the issue of identifying and linking or merging owners we believe to be the same person.

October 27: Generated lists of owners who were previously connected to Blocks 43 and 52, so that they can be eliminated from the db if they're no longer connected to anything else. SF is doing this work.
November 7: SF and I worked through a lot of different approaches to confirming that no useful data was deleted during our cleanup. We have plausible explanations for all but 32 of the orphaned owners, and we have identified about ten titles which were deleted by editors during the summer work period; these must have been purposefully deleted around the time they were created -- they never got saved into a backup -- so they must have been erroneously entered. I think these account for the remaining orphaned owners. In the process I added a new "titles as seller" field to the owners table, and we confirmed the consistency of the data in lots of other respects, so we're looking good.

-----

So we know these deletions were intentional, they were carefully checked, and they seem to have been primarily associated with Blocks 43 and 52. If the intention was that these titles should be re-captured, that apparently never happened.
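The "now-unlinked owners" check described in those notes can be sketched with an in-memory SQLite toy schema. The real landscapes db schema surely differs; the table and column names here (owners, titles, title_owner) and the sample rows are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE owners (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE titles (id INTEGER PRIMARY KEY, number TEXT);
CREATE TABLE title_owner (title_id INTEGER, owner_id INTEGER);
INSERT INTO owners VALUES (1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO titles VALUES (10, 'T-10');
INSERT INTO title_owner VALUES (10, 1);
""")

# Owners no longer referenced by any title are candidates for deletion.
orphans = conn.execute("""
    SELECT o.id, o.name FROM owners o
    LEFT JOIN title_owner t ON t.owner_id = o.id
    WHERE t.owner_id IS NULL
    ORDER BY o.id
""").fetchall()
print(orphans)  # -> [(2, 'B'), (3, 'C')]
```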
Email discussion with NH about approaches to obscure, old and obsolete characters; I've adopted into the schema a couple of the ideas that came out of it, and elaborated some of the descriptors we already have. We're slowly getting towards an understanding of how best to approach these oddities.