I've now finished and tested the code for importing records into the db. The end result (which creates 438 primary title inserts, but thousands of ancillary records) was too large to run in a single operation, so I'm using
<xsl:result-document> to split it into five sub-operations. Each one produces a detailed report on what it's done, which, combined with the source XML file, should be all we need in terms of an audit trail.
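The splitting technique itself isn't XSLT-specific. As a rough sketch of the same idea in Python (file names, batch count, and the report format are all invented for illustration), divide the generated statements into batches, write each batch to its own file, and emit a small per-batch report:

```python
def split_into_batches(statements, n_batches):
    """Divide a list of SQL statements into n roughly equal batches."""
    size = -(-len(statements) // n_batches)  # ceiling division
    return [statements[i:i + size] for i in range(0, len(statements), size)]

def write_batches(statements, n_batches=5):
    """Write each batch to its own file and return a one-line report per batch."""
    reports = []
    for i, batch in enumerate(split_into_batches(statements, n_batches), start=1):
        with open(f"import_batch_{i}.sql", "w") as f:  # invented file naming
            f.write("\n".join(batch))
        reports.append(f"batch {i}: {len(batch)} statements")
    return reports
```

With 438 primary inserts plus the ancillary records, five batches keeps each sub-operation at a manageable size.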
It took a little longer than expected because of two additional aspects I hadn't planned for initially. Both the lenders and the owners tables have mechanisms for distinguishing between individuals and institutions, so I ended up adding some code to determine which of the two categories each new record falls into, by examining the name of the new owner or lender. In addition, I decided to parse out the forenames and surnames of individuals.
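The actual logic lives in the XSLT, but the two heuristics translate naturally into a short Python sketch. The keyword list and the split-on-last-space rule are my own illustrative assumptions, not the script's real criteria:

```python
import re

# Invented keyword list -- the real script uses its own criteria.
INSTITUTION_HINTS = re.compile(
    r"\b(bank|trust|company|co\.?|ltd\.?|society|association|church|university)\b",
    re.IGNORECASE,
)

def classify_name(name):
    """Guess whether a lender/owner name denotes an individual or an institution."""
    return "institution" if INSTITUTION_HINTS.search(name) else "individual"

def split_person_name(name):
    """Split an individual's name into (forenames, surname) on the last space."""
    parts = name.rsplit(" ", 1)
    return (parts[0], parts[1]) if len(parts) == 2 else ("", name)
```

A keyword heuristic like this inevitably misclassifies edge cases (e.g. surnames that happen to contain an institutional word), which is one reason a detailed per-batch report is worth having: it lets a human spot-check the category assignments.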
I would post the script here, but it's very long, very messy, based on invented XML in the source document, and it's a one-shot task that we won't have to do again, so it's not of any general use. However, the techniques I've figured out for this probably will be used again.
I haven't yet run it against the live database, only against the dev db; I don't want to make major changes right at the end of the day, so I'll do it tomorrow morning, after taking a fresh backup and recording the upper bounds of the current db tables, so that I can report to J-SR and VG and they can look at the results.
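Recording the tables' upper bounds before the import makes it trivial to identify exactly which rows the run added. A minimal sketch of the idea (using sqlite3 and an invented table with an integer `id` primary key; the real db and schema will differ):

```python
import sqlite3

def max_ids(conn, tables):
    """Record the highest primary-key value in each table before an import."""
    bounds = {}
    for t in tables:
        row = conn.execute(f"SELECT COALESCE(MAX(id), 0) FROM {t}").fetchone()
        bounds[t] = row[0]
    return bounds

def new_rows(conn, table, bound):
    """After the import, every id above the recorded bound is a new row."""
    return conn.execute(
        f"SELECT id FROM {table} WHERE id > ?", (bound,)
    ).fetchall()
```

Everything above the recorded bound can then be dumped into the report for review.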
A database project to collect historical data on properties and titles.