Leaving a little early. No-one else needs to stay till 4.30 today.
I have the link checker requested by AC coded up and working, and it's been tested in the VPN, Manifest and Trials dbs (tested in dev, then rolled out to live). I will also roll it out to JSRDocs when I get a chance (there's no dev there, so I always do that one last), and possibly Properties (although there are no links in that DB AFAIK).
Fixing an urgent XSLT bug in MoEML.
With JJ, have been working on aligning our byline attribution phraseology with Marc Relator terms. We're almost there.
MG, a researcher from Rennes looking into DH Centres, came to interview me about our infrastructure, projects and organization.
I have a working link-checker for FLD_LINK and FLD_LINKSET fields throughout the database, which uses curl to check links and reports when they fail. There is still some work to do, because I'm not yet correctly escaping characters such as apostrophes, which should never turn up in filenames or folder names but regularly do because users love to ignore instructions. I'll have to decide whether to report these as errors even though the links work (on the basis that the names are likely to cause problems down the road), in which case I should also add regex checking to the input fields on the page, or whether to just escape the characters so that curl can handle them, and give up on trying to badger people into using sensible filenames.
This feature was requested for VPN, but will also be useful for JW's projects.
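The heart of the check can be sketched in a few lines of bash (check_link is a hypothetical name; the real checker pulls its URLs from the FLD_LINK/FLD_LINKSET fields rather than taking an argument):

```shell
# Minimal sketch of a curl-based link check. curl reports status 000
# when it cannot connect at all, so anything outside 2xx/3xx is broken.
check_link() {
  local code
  code=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' "$1")
  case "$code" in
    2??|3??) return 0 ;;
    *)       echo "BROKEN ($code): $1"; return 1 ;;
  esac
}
```

Characters like apostrophes in path components still need percent-encoding before the call, which is the escaping problem described above.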
On early and late duty (no-one else in the office). Plus we had an emergency with a PHP4 application.
Wrote the next batch of notes for the course.
Added <rs> to the schema, then created the eventography file, with one sample event, and marked the event up in two files using <rs>, as a demonstration for CB. Once we have some real data in the eventography, I'll add the requisite XSLT to make it functional.
Fixed several hundred encoding oddities and errors related to name tags and docAuthor tags. Also updated Schematron to try to catch some of these. We're now in a position where we have almost all names marked up as either @type="person" or @type="pageant", and the question now is whether we should simply make "person" the default, and therefore optional, and save a bunch of space in files by deleting all instances of it. I see no reason not to do that. There are some name tags which lack it already, and they're working properly.
MC changed the include_path and extension_dir settings -- it appears there was an update in May, at which time these should have been changed but weren't. However, the app still won't work. There are two remaining settings that look wrong to me:
MYSQL_INCLUDE   no value
MYSQL_LIBS      no value
On a phpinfo.php for PHP5, these show as:
MYSQL_INCLUDE   -I/usr/local/include/mysql
MYSQL_LIBS      -L/usr/local/lib/mysql -lmysqlclient
I'm waiting to see if a fix to these will do the job. If not, we'll have to start picking carefully through every one of the settings, trying to figure out what might be wrong. The only other thing I can think of is that PEAR might be broken somehow in this new install.
In the long run, we should get Agenda shifted from PHP4 to PHP5. If this update was done in May, and we're the first to report that the paths were broken, then clearly hardly anyone is using it any more.
This morning I was contacted by two different departments to report that the Agenda application used to schedule classes is not working. On investigation, I discovered that the login page is truncated immediately at the point where the first call to the database is made. After reading through the code, I couldn't see anything wrong, so I suspected a connection problem such as an ACL issue between web.uvic.ca/lang and csmgenr2 (the mysql server). The db is there, functioning, and accessible through PhpMyAdmin.
I raised a ticket with sysadmin, and MC got back to me to say that there doesn't seem to be a connection issue, but the error in the logs was with a PHP include:
2012-08-29T08:43:23-07:00 email@example.com user.notice php_cgi: PHP Warning: main() [<a href='function.include'>function.include</a>]: Failed opening 'DB.php' for inclusion (include_path='.:/usr/local/php-4.4.8/include/php:/usr/local/php-4.4.8/lib/php') in /home3/80/lang/www/agenda/Application_Files/script_files/php/database_connection.php on line 11
The error comes when the db connection script tries to include DB.php, which is the PEAR library this project uses for accessing MySQL. This sent me to check the PHP settings info in phpinfo.php, and I found a number of oddities:
My current theory is that when PHP was updated from 4.4.8 to 4.4.9, those key settings in the master php.ini file should have been changed, but weren't; and as a result, the include of DB.php is failing. I've reported this back to sysadmin, and I'm waiting for a response. In the meantime, I've tried overriding the include_path setting in a local php.ini, but I can't get that to work; perhaps those settings can't be overridden.
In the meantime, I could no longer bear to look at the logo which proclaimed "Agenda / Organize. Mangage. Simplfy" (sic, seriously), so I've fixed that.
Changes to course information on SA's instructions.
Working on standardizing the current byline markup so that I can use it as the basis for generating more formal <respStmt> elements using our taxonomy of responsibilities. I have all names now marked up as names, but the informal descriptions of their roles are going to be very difficult to parse into clear categories.
Standardized our usage of <list>, removing a couple of obsolete usages, adding documentation to the schema, and tweaking the XSLT. Many files changed.
Over 300 errors in CSS now corrected.
On early and late duty because no-one else is here.
Generated a single-file corpus from the collection, and ran css.xsl on it to generate a "stylesheet" which could be validated. There were over 300 errors, so I've been working through them, fixing typos and other problems with CSS in @rend attributes. I've got about half of them done so far.
Fixed the search bug that was returning multiple copies of the same search hit in the results; it was caused by failing to take account of cases where there were multiple search hits with the same parent. Also found a bunch of bad CSS values in @rend attributes and fixed them. I need to do a formal search through the whole corpus for these.
I noticed the other day that when you clicked on a search hit in the search results, the link took you to the relevant document, but not to the specific hit you clicked on. I've now fixed that, but another one persists; for some hits in some documents, the same hit is being returned multiple times in the results. Working on that now...
Continuing my process of writing detailed notes on all the topics before we start creating presentations, I've written the notes on XML Namespaces this morning.
The table returned from record deletion was embellished with the old column-header filter fields. I've now got rid of those, since they're obsolete. Pushed out the fix to all current live projects.
At JW's request, converted the Court field in the main Trial table from a single item to a one-to-many. Here's the SQL:
/* Create the new linking table. */
CREATE TABLE IF NOT EXISTS `courts_to_trials` (
  `ctt_ctt_id` int(11) NOT NULL auto_increment,
  `ctt_tr_id_fk` int(11) default NULL,
  `ctt_ct_id_fk` int(11) default NULL,
  PRIMARY KEY (`ctt_ctt_id`),
  KEY `ctt_ibfk_1` (`ctt_tr_id_fk`),
  KEY `ctt_ibfk_2` (`ctt_ct_id_fk`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

/* Add constraints to the new linking table. */
ALTER TABLE `courts_to_trials`
  ADD CONSTRAINT `ctt_ibfk_1` FOREIGN KEY (`ctt_tr_id_fk`) REFERENCES `trial` (`tr_id`) ON DELETE CASCADE ON UPDATE CASCADE,
  ADD CONSTRAINT `ctt_ibfk_2` FOREIGN KEY (`ctt_ct_id_fk`) REFERENCES `court` (`ct_id`) ON DELETE CASCADE ON UPDATE CASCADE;

/* Copy existing data. */
INSERT INTO `courts_to_trials` (`ctt_tr_id_fk`, `ctt_ct_id_fk`)
  (SELECT `tr_id`, `tr_court` FROM `trial`);

/* Delete original field. */
ALTER TABLE `trial` DROP FOREIGN KEY `tr_ibfk_1`;
ALTER TABLE `trial` DROP COLUMN `tr_court`;
Updated the local_classes.php file appropriately. Tested on dev, then ran on live. In the process of testing, found a little bug: when deleting a record, the table that comes back still has all the old column-header filter fields in it. I'll fix that now.
Timesheets for Aug/2 done.
Shutting down early to clean up my desk area, and then leaving a couple of hours early.
Processed all the files, tested locally, made some fixes, and migrated the changes to the live db. Also updated documentation on date encoding.
Needed to have all my ducks in a row ready for conversion of MoEML dating methodology and rendering tomorrow morning.
Lots of changes, most of them relating to the date coding changeover tomorrow, but also others:
Good to go for the big conversion tomorrow.
New account on home1t has been set up for storage of page-images, so I've set up the db to point to it, and tested it. I've also tweaked the dev_to_live_update.sh script so it doesn't copy itself into the live db tree.
Getting presentation finished and off my plate...
GN and I met with JS and LG -- JS is handing over to LG for a year, so LG will be our contact on Cascade projects.
Links to biblio items on the ids page weren't working. I've now fixed it so that they show a little popup with the full biblio item. In the process, I identified a slight stupidity in XHTML page construction, so I've set myself a task to fix it.
In the process, made lots of SVG diagrams that may be useful in future, and clarified some of my own ideas a bit.
Default Oxygen XML parser prefs include (under the Relax NG section) 'Add default attribute values' set to checked (on). As a result, transforming a document using tei_all will produce output with (no surprise now) default attribute values.
Either turn this option off or don't transform tei_all documents.
Meant to get out early, but tech support for db application came up.
Wrote some more lecture-style stuff about schemas, and created some sample materials.
On Friday I'll be converting all our existing date encoding to make proper use of the custom dating and calendar attributes, as well as updating the XSLT to take account of this, and providing new documentation. Today I wrote and tested the XSLT conversion code for the source documents, and wrote the first half of the documentation for encoders. What remains:
We don’t need to mark up dates in Contributor bios. They function as strings of characters in the bios.
1700 would be a reasonable cut-off for other dates. After 1700, we’re into secondary sources, not primary sources. And whatever happened at a location after 1700 isn’t really “early modern” by the conventions of the discipline of English studies. (Early modern is more capacious for historians.)
The guiding principle needs to be: are we going to do anything with this date? Is it harvestable data, or information that we need to manage the site effectively? If not, then it’s just a string of characters.
Changed a title and added a poster on JS-R's instructions; also did some research on easy ways to do free event registration, and found three plugins that might be usable. This will be required for one of the talks, which will be a big draw, and must be limited to 80 attendees.
On instructions from JW, added a new table and two new fields to the db. Tested first in dev, then carried out the same changes in live. SQL:
CREATE TABLE IF NOT EXISTS `period` (
  `pd_id` INT NOT NULL AUTO_INCREMENT,
  `pd_name` varchar(128) collate utf8_unicode_ci default NULL,
  PRIMARY KEY (`pd_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `period` (`pd_id`, `pd_name`) VALUES (1, '[Unassigned]');
INSERT INTO `period` (`pd_id`, `pd_name`) VALUES (2, 'AR');
INSERT INTO `period` (`pd_id`, `pd_name`) VALUES (3, 'FR');

ALTER TABLE `trial` ADD COLUMN `tr_period` INT(11) NULL AFTER `tr_cote`;
ALTER TABLE `trial` ADD COLUMN `tr_photos` INT(11) NULL AFTER `tr_description`;
ALTER TABLE `trial` ADD CONSTRAINT `trial_ibfk_3` FOREIGN KEY (`tr_period`) REFERENCES `period` (`pd_id`) ON DELETE SET NULL ON UPDATE CASCADE;
Corresponding changes made to local_classes.php. Also wrote a script to copy data from live to dev db, and fixed a minor bug in the dev_to_live_update.sh script.
On late duty, and then departmental website changes came in, so I got them done before leaving.
Made some more progress on the TRUTH presentation. It's coming together slowly.
Made a number of changes, on BB's instructions.
I've rewritten my original TitleSortComparator class so that it not only handles leading articles but also decomposes the ash character into an ae sequence before doing the comparison. This handles the outstanding issues with sorting of both the historical personography and the bibliography.
This is what had to be done:
The new class is MolSortComparator. It should be used as a model for future comparators and collators for eXist-only (no Cocoon) projects.
<xsl:sort select="persName/reg" order="ascending" collation="http://saxon.sf.net/collation?class=ca.uvic.hcmc.mol.MolSortComparator"/>
After deployment, eXist needs to be restarted, but I was able to do that from the Tomcat Manager interface (although it took several minutes for the app to shut down).
Here is an ordered sequence of events that I think would make a good test case for journeys: Theseus and the Minotaur. It is quite simple, and involves relatively few places and participants. We might later wish to link this up with, or include, the origin of the Minotaur and the practice of sending it tribute.
These events are drawn from Apollodorus Epitome 1.5-10.
...completion of work started last Friday, on DR's instructions.
Leaving early. G&T creeping up again.
Started writing notes for the introduction to the November TEI course. I'm basically writing lecture notes, from which I'll construct some presentations. Not sure yet what to use for the presentation format.
I think it's all basically working correctly, except that now we will have to convert from using @*-custom attributes, and I'll then have to rewrite the mouseover code accordingly. The XML and XSLT will have to be changed at the same time. That's for next week.
The old site is still the live site, and will be for a while, so we're still updating it. Did some updates on DR's instructions.
Updated the VPN, Properties, Trials, Manifest, and JSRDocs database projects with the new codebase changes.
Right now the project_variables_SAMPLE.php file, which is copied to create a working file, contains a hard-coded set of instructions for the db. These should be linked via an include somehow, because as the db code develops, the instructions change, and the ones in project files get out of date and have to be updated manually.
Just blogging something I keep having to look up. This is how to switch your SVN repo from the old tapor URL to hcmc:
svn switch --relocate https://revision.tapor.uvic.ca/svn/[reponame] https://revision.hcmc.uvic.ca/svn/[reponame]
I'm just mapping out what I have so far from AC, based on our recent discussions. This will probably change a lot before implementation.
Met with ECH to plan and draft an abstract on the use of TEI in our project for ICLDC. Did a draft, which is too long, of course, and sent it to ECH.
As documented in the AdaptiveDB blog, I've done the basic enhancements to the AdaptiveDB codebase that were specified for phase one of VPN. I'll roll out the results to the live DB tomorrow. The only remaining tasks for me arising out of the meeting the other day are: the requirement to generate a bash script to check links automatically (still haven't figured out the best way to approach that), and the alteration of the db structure to hive off the illustrations (still waiting on feedback from AC on that one).
As part of the VPN project, phase one (agreed by the HCMC Committee), I've enhanced the Adaptive DB codebase with a couple of new features:
This development was done on the VPN dev db, and will be rolled out to the live version for full testing, before being rolled out to other dbs.
When marking up dates, tag monarchical reigns with notBefore and notAfter as follows:
When Stow says "yet then called the riuer of the Wels, which name of Ryuer continued: and it was ſo called in the raign of Edwarde the firſt"
tag the reign:
<date notBefore="1272-11-20" notAfter="1307-07-07">raign of <hi>Edwarde</hi> the firſt</date>
This post based on an email exchange between JJ and SM.
Stayed late because someone wanted to work late, and I was deep in date conversion code anyway.
I have Julian to Gregorian date conversion popups working for simple dates (@when only). This is not yet on the live site because date ranges are not yet working, but I hope to have them in place soon.
There is a lot of unnecessary conversion between string representations and actual xs:date values all over the code, due to the way it's grown by accretion and the way we've changed our minds about rendering rules. I should really look at this again, and see if I can take strings on input, convert to xs:date, do all manipulation with xs:date, and then render out right at the end. The problem here is that when we have partial dates (e.g. year-only dates), constructing an xs:date object requires the addition of month and day values, otherwise it fails; this introduces a spurious precision to a date which consists only of a year.
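One way to sidestep the constructor failures, sketched here in shell for brevity (the real code is XSLT), is to pad partial dates to full ISO form for the xs:date work while keeping the original string for rendering, so the spurious precision never reaches the output:

```shell
# Pad a partial ISO date (YYYY or YYYY-MM) so it can be parsed as a full
# xs:date; the original string must be kept alongside for rendering.
pad_date() {
  case "$1" in
    [0-9][0-9][0-9][0-9]) echo "$1-01-01" ;;
    [0-9][0-9][0-9][0-9]-[0-9][0-9]) echo "$1-01" ;;
    *) echo "$1" ;;
  esac
}
```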
Pulling out and reworking real examples from codebase, doing some background reading on VariaLog, and rethinking a bit. Now waiting for some advice from CC.
Baptism dates are now rendering as prescribed by CB. This took longer than expected, because it's a special case, but it still needs to be part of the regular rendering pipeline so that the projected handling of non-Gregorian calendar dates works as expected with it.
KS-W and I clarified and extended the existing system for classifying bios, and place and vessel definitions, and I updated the XSLT accordingly:
A number of existing bio entries will be reclassified from incomplete to unavailable by KS-W.
I've written a little bash script that makes updating an eXist db trivial.
The update script relies on a publicly accessible xquery script inside the target database that can return the timestamp of the most recently updated file in the /db collection. Martin wrote one that looks like this. You need to get it in to the db before the rest will work.
The update script itself is written in bash, and uses curl to interact with eXist. You can find the script itself here. It is fairly well documented, and only requires a few settings adjustments to work. It works by checking the local tree for the latest timestamp, compares it to the timestamp it got from the exist db, then creates a list of all local files that need to be pushed to the remote db.
To launch it in Ubuntu I use a Gnome launcher that invokes the script (the 'Exec=' line) like this:
Exec=gnome-terminal --title="Upload files to DB" -e "/complete/path/to/update-script.sh"
which opens a terminal and then runs the script.
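The core of the script's flow can be sketched like this (the function name, REST paths, and the location of the last-modified xquery are assumptions; the real script linked above is the authoritative version):

```shell
# Hypothetical sketch of the update flow: ask the db for its newest
# timestamp via the xquery, then PUT every newer local file over REST.
push_newer() {
  local base="$1" dir="$2" auth="$3" remote_ts
  remote_ts=$(curl -s -u "$auth" "$base/rest/db/last-modified.xq")
  find "$dir" -type f -newermt "$remote_ts" | while read -r f; do
    curl -s -u "$auth" -T "$f" "$base/rest/db/${f#$dir/}"
  done
}
```

Note that -newermt is GNU find, and the comparison happens on the local clock, so skew between the two machines can cause files to be missed or re-pushed.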
Started work on my presentation for the TRUTH event in September.
At present, TOCs are only generated for documents which have multiple <text> elements. This is resulting in documents which should never have multiple texts being created that way, just so they get an automated TOC. Obviously this is silly, and I need to revise the system so that it continues to support the existing documents (while we change them -- although perhaps not all will change), but supports TOC generation for single-text documents based on some measure of complexity. (Or perhaps all documents with multiple div elements should get a TOC by default?)
Did some editing of guidelines, specifications and minutes.
I now have a working and tested function which, given a Julian date at any level of specificity (YYYY, YYYY-MM, or YYYY-MM-DD), will return either a single Gregorian equivalent, or (more likely) a sequence of two Gregorian dates representing a corresponding range, accounting for the leap day offset and the March New Year issues.
Meanwhile, we've determined that in the context of the personography, birth and death dates will be rendered as en-dash-separated years only where both are precise and certain (i.e. @when). In other cases, the renderings will be split into b. xxx d. xxx clauses, so that the ranges, precision, certainty etc. on the two components (which may be different) can be expressed unambiguously.
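For a single fully-specified date, the offset half of the conversion is mechanical. Here is a sketch using GNU date (the real function is XSLT, also accepts YYYY and YYYY-MM inputs, and handles the March 25 new-year problem, which this deliberately ignores):

```shell
# Convert a Julian calendar date to Gregorian by adding the accumulated
# leap-day offset: for year Y the Gregorian calendar is ahead of the
# Julian by (Y/100 - Y/400 - 2) days. Approximate within a few weeks of
# a skipped leap day at a century boundary. Requires GNU date.
julian_to_gregorian() {
  local y=${1%%-*}
  local offset=$(( y / 100 - y / 400 - 2 ))
  TZ=UTC date -d "$1 + $offset days" +%F
}
```

For example, julian_to_gregorian 1605-11-05 gives 1605-11-15, the familiar ten-day shift for the seventeenth century.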
The following from SM (examples in this file):
Met with JL, CP, IO'C, and DB-M re the georeferencing of maps from the Coldesp collection and the Library's collection. There will be another meeting in September to thrash out more details, and in February to look at some results from student work in a GIS class on some of the existing maps.
Deep in the vagaries of historical calendars, I needed to finish a bit of code to the point where I knew it was working before finishing.
I've started work on the date conversion code for handling Julian vs proleptic Gregorian dates. It's astoundingly complicated, but for now I've decided to handle two core features: the increasing day offset caused by the Julian leap year miscalculation, and the fact that between 1155 and 1751 the year was generally viewed as starting on March 25. There are dozens of other potential gotchas, but this will at least give us a basic way of handling conversion that will be approximately right nearly all the time.
The core ideas are:
I've spun off all the XSLT relating to dates into a separate module, and created a testing module that I'm building as I go along, so that we can verify everything is working without too much extra work whenever we make a change.
I've gone through SR's spec document for the form he wants to create to gather metadata from the authors of the articles in his encyclopedia. Quick and dirty would be SurveyMonkey, but it's not really set up to allow the user to add additional elements to the survey form. (e.g. name major works associated with artistic movement X and rate that work's importance outside the movement). I'm not sure he'll be willing to compromise the look and feel of the form to the degree I think will be required by SurveyMonkey.
Also created a very small ajax site that does allow the user to add elements to the form, and then processes the values of those elements to generate strings that go into an SQL db. Major concern is what to use as delimiters within those text strings. E.G. if the field in the DB is called Question1 and there are an arbitrary number of fields in the form whose values I must concatenate to produce the string that goes into Question1, what do I use to delimit each of those values from the form field? We're only using the SQL db as a temporary repository, so I don't really want to go to the bother of creating a full-on set of relational tables and all the code to modify then properly.
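One possible answer to the delimiter question (a suggestion, not a decision): use a control character such as the ASCII unit separator, which no user can type into a form field, so it can never collide with the values themselves:

```shell
# Join an arbitrary number of form-field values with the ASCII unit
# separator (U+001F); split on the same character when reading the
# concatenated string back out of the temporary SQL field.
join_values() {
  local IFS=$'\x1f'
  printf '%s' "$*"
}
```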
Leaving a little early. G&T is getting a bit high.
Summary for past two weeks:
July 30 - Aug 03 +3.0 : M +1.0 T 0 W +1.0 R 0 F +1.0
Aug 06 - Aug 10 +2.0 : M 0 T +1.0 W 0 R +1.0 F -0.5
Consider the following XML snippet:
<name type="first">Joe</name> <name type="last">Bloggs</name>
A common problem when working on an eXist project is that it gets serialized thusly:
<name type="first">Joe</name><name type="last">Bloggs</name>
That is, the whitespace between the two elements gets discarded at some point along the line.
Previously, I've tried making adjustments to the XML directly (p xml:space="preserve") and adding declarations to the XSLT, like xsl:preserve-space. They are, at best, unreliable.
Martin and I worked through the various possible fixes, and discovered three ways to make this work:
1) when configuring your exist:serialize options, add 'indent=yes'
2) if handing something off to XSLT with transform:transform, send it a final directive 'indent=yes'
3) look for the string 'preserve-whitespace-mixed-content' in the main eXist config file ($existHOME/WEB-INF/config.xml) and change the value from no to yes. You'll need to re-index your collection before this will take effect.
We figure option 3 is the most practical way of addressing this as it doesn't add any overhead to your application.
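If it's where eXist usually keeps it (on the indexer element; placement varies by version, so search the file for the string as described), the change looks something like this hypothetical snippet:

```xml
<!-- $EXIST_HOME/WEB-INF/config.xml -->
<indexer preserve-whitespace-mixed-content="yes">
  <!-- other indexer settings unchanged -->
</indexer>
```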
Met with AC and worked out details of the phase-1 enhancements agreed by the HCMC Committee:
Adaptive DB enhancements and data reorganization:
Search interface enhancements:
Added the list of new lectures scheduled for this year, and tweaked the committee list, per JS-R.
Lots of people working, lots of queries and questions.
Attended a telco and took minutes. Some prep and follow-up as well.
The personography has been enhanced with codes for student contributors, and their page is now equipped with an auto-harvested table; CB will be checking that the existing information from that page is all in the personography entries before deleting the old list.
The personography of historical figures has now been regenerated using the new system, at a new URL, and its table captions are now in boilerplate.xml; the table is now part of a static page which can be edited.
I investigated the use of <respons> instead of <occupation> for the person entries, but it has a required attribute @locus, which is supposed to specify precisely what part of the element concerned the person is responsible for. This is not what we need to do at all, so <occupation> is our only option, but I'll continue to raise this issue on TEI lists, because it appears there's no useful way to assign responsibility according to a formal scheme without using the inappropriate <occupation> element.
Ordered HCMC office supplies today.
Arrival date: next day usually
As part of fixing the missing text in links in MIDD17, I tested and fixed the system for implementing links from one document to a fragment of another. This is basically how it works:
If you want to link from one document (say MIDD17) to another (say TRIU1), you can link to the whole document using mol:TRIU1. However, sometimes you might want to link to a specific part of the TRIU1 document (which is a multi-text document). You can do that by giving the target <div> in that document its own @xml:id. For instance, we might want to link to the "Grocer's Company" section of the TRIU1 document, so we find the div that contains that section, and give it an @xml:id. Our convention is that an @xml:id for a section in a target document should be created with a prefix that consists of the main document id (TRIU1) and an underscore. So we do this: <ref target="mol:TRIU1#TRIU1_grocers">. In other words, mol: + the document id + # + the div id.
Links constructed like this are now working on the site (see MIDD17 links to TRIU1 for examples).
Lots of stacked-up work from my vacation...
Monographic and journal titles need to be italicized, but when nested inside another of the same, they need to revert to roman type. That is now working.
Note to self: there are a lot of old CSS classes that we're no longer using, which could be cleared out to simplify the CSS.
Met with JJ, SA, and GG to plan the outline of the course, and discuss logistics.
Wrote the abstract for our TRUTH presentation in September, and ran it by Claire, then submitted it to US, along with a short bio. I'll work on the presentation over the next couple of weeks so it's ready in plenty of time.
We're currently using a rather messy textual classification method based on the use of <classCode> pointing at a non-existent scheme, and what's more, our classification codes seem to overlap a bit, and fall into two distinct classes. I think it's time to revisit this aspect of our encoding, and put it on a sound formal basis. To that end, I have:
- created global_metadata.xml, in which we can centralize a variety of metadata and link to it (this should include things such as availability/licensing, eventually);
- expanded <revisionDesc>/@status, which was previously only able to be set to "proofing". We now have a set of document status values which I think will be more useful.
I think we need two separate taxonomies, one for text types and one for content types (e.g. prose vs religion). Then we can add any number of <textClass> elements to any given document, pointing at the specific scheme and code, and use these to filter documents in specialist TOCs and in the search interface.
We should also presumably look for any existing applicable taxonomies that we could adopt.
This arises out of my preparation of the documents for submission to the TAPAS project, which required some standardization of data in the headers. I also removed the pointless "An Electronic Edition" subtitle from all our documents, and tweaked a couple of other things.
Just blogging this so that the next time I forget it, I don't have to rediscover it again:
Catching up after long vacation. Lots of backups etc. to do.
Met with CC to discuss the grant application and the TRUTH presentation in September, and also fixed a couple of things in the db (publishing Le Blanc).
Edited the personography and associated code as follows:
<persName>/@type "tech" is now obsolete (and should be removed from the schema -- note to self). All "tech" people are now "cont" (contributors).
Changed the revisionDesc/@status values to remove "incomplete", and substitute "stub" and "empty" for the two types of incomplete pages. Suitable boilerplate is now added to pages with these two values from boilerplate.xml, and CB has updated all pages so their values are presumably correct. Some pages with other status values still have a stub message hard-coded into them, but CB will fix this.
<mentioned> elements are now handled correctly (rendered as italics).
The Chrome bug (rendering <q> with straight instead of curly quotes) has been reported to the Chrome project.
These instructions were originally posted by MH in Flow.
We're going to be moving over to using a curly apostrophe (Unicode character U+2019, which is the same as the right [closing] "smart" single quote). This means you'll need to be able to type this character in Oxygen when you need it. You can do this using a Code Template.
First, find that character in your Character Map, and copy it to the clipboard so you can paste it when you need it. Here it is, in case you want to copy it from here: ’
Click on Options / Preferences, then type "Code Templates" into the filter box at the top. You'll see a list of the existing Code Templates. Click on New, then fill in the following details:
Description: Curly apostrophe (U+2019)
Associate with: XML Editor
Content: [Paste the apostrophe in here]
Then press OK.
Now you can do this to insert a curly apostrophe:
Press Control + Space. You should see the Code Template selector appear, and there is only one template, which is this one. Press OK to select it.
These instructions are adapted from the instructions posted to Flow by JJ on 3 July 2012.
Netlink ID: london
Full address: firstname.lastname@example.org
Obtain password from JJ
All of the emails sent to this account are automatically forwarded to email@example.com. Obtain password from JJ and see documentation/emails.odt in the SVN repository for more information on using the Gmail account.
These instructions were originally written by JJ and posted to Flow on July 4, 2012.
Go to MLA website: mlahandbook.org
[Get user id and password from JJ.]
Search function works fairly well if you know MLA already.
These instructions were created by JJ and were originally posted to Flow on July 4, 2012.
Go to http://webstats.uvic.ca/.
Username = jenstad
PW = [ask Janelle]
Once you are in, you can view sessions, page views, hits, referrals, domains, entrance pages, and exit pages ... and sort by date. Normally, you won't need to check these stats unless Janelle asks you to perform a specific task.
1. The problem of duplicated files mentioned in the previous blog post has been solved.
2. The inverted transcripts for frac1 and frac2, as mentioned in the previous blog post, have been corrected.
3. The problem with prmf6 mentioned in the previous blog post has been solved.
4. ES and SA encountered a strange problem with the list of items in Keywords. Some items (grandes écoles, système éducatif français, classes préparatoires) are displaying in the list while not appearing anywhere in the XML files. These items have to be removed from the list, and the items "musique" and "vie personnelle" should be translated into English.
SA solved the problem.
5. ES added the transcript for all videos that are available and on the site. Nine of them need to be annotated.
6. The [age] section of the search function, when selected on its own, leads to an error message. However, when selected with another filter (e.g. [male] + [10-25]), the age filter functions properly. SA will look into this.
7. SA contacted Pat with regards to thumbnails. If this can be fixed quickly, it will be done before SA goes on vacation at the end of this week. Otherwise, the site will go live and thumbnails will be added at a later stage. SA will inform ES and CC.
8. ES noted an issue with the display of "there are no other video from". While the syntax in French has been fixed, now only "from Mali" or "du Mali" is displaying.
9. ES asked SA whether it could be possible to make a search within annotations optional so that the site user can choose to look for a word in the transcripts only or within the transcripts+annotations. In theory, this could be done. More discussion to follow if CC agrees with this idea.
Hours worked since beginning of August: 15
SA uploaded latest changes to Francotoile21.
(1) ES noticed the following problems to be fixed before moving to the production site:
- veqf1 should be removed from server as it was renamed/replaced, as follows: vepf2.
- mixc1 should be removed from server as it was renamed/replaced, as follows: mixbc1.
- eduf2 and eduf3 should be removed from server as they were renamed/replaced, as follows: pscf2 and pscf3.
- prmf5 appears twice on the map (?) Both files should be deleted from server, as the video is missing.
- lafc1 should be removed from server as it was renamed frac2. NOTE: ES notices that the transcripts are inverted - frac1 has the transcript of frac2 and frac2 has the transcript of frac1. This has to be amended.
- cltca1 and lacfa1 should be removed from server as they were renamed/replaced, as follows: cltq1 and vepq1.
- prmf6 (Valentin's video) displays as xxxx1 on the map (??) The video is not playing. Will need to look into this.
(2) In response to SA's questions, here is the list of videos that are missing thumbnails:
mixbc1 ; fraq3 ; fraq4 ; fraq5 ; fraq6 ; vepq1 ; cltq1 ; franb1 ; franb2 ; franb3 ; cltc1 ; frac1 ; frac2 ; edum1 ; pscf2 ; pscf3 ; pscf4; prmf6 ; cltf4 ; vepf2 ; accf2 ; fraf3 ; vepf1 ; lsrf3
Second part of bisected vacation: August 2 through August 3.
Came in for postdoc interviews, but didn't do a full day.
First part of bisected vacation. Jul 9 through July 31 2012.
Interviews #31, & #32 completed today.
SA received request from MK (P&A) to review proposed new website and provide feedback before website is submitted for approval.
Task: compare spreadsheet structure to created website content and layout; note differences, typos, omissions and changes; recommend changes, check for adherence to guidelines.
Completed my observations and forwarded them to SA. SA & JN discussed our individual lists; SA then summarized both our lists and sent email to MK (cc'd me) with a list of