Title: Human Computing: Modelling with Meaning

Author: Willard McCarty
Author: Meurig Beynon
Author: Steve Russ
Statement of responsibility:
Marked up by Martin Holmes
Patricia Baer
Marked up to be included in the ACH/ALLC 2005 Conference Abstracts book.
Source(s):
None
Text classification:
Keywords:
3-paper session
Keywords:
  • humanities
  • Computing Science
  • experience

Human Computing: Modelling with Meaning

Willard McCarty   

King's College London

Meurig Beynon    wmb@dcs.warwick.ac.uk

University of Warwick

Steve Russ    sbr@dcs.warwick.ac.uk

University of Warwick

In bringing the humanities and computing together, the question of how computer science relates to the humanities has to be addressed. Most striking is the starkly different treatment of meaning in the humanities and in computer science. To ignore this issue is to risk investing our limited notion of computer science with unwarranted authority. The commonplace view of computer science suggests a monolithic image of computing, in which all activity reduces to the execution of formal algorithms. Computing in the wild, in contrast, is both incorrigibly plural and rich in possibilities for marrying a science of computing with a computing of the humanities. This session is designed to explore one such possibility.
The standard way of construing computer science focuses on combinatorics, syntax and algorithms. Its guiding question is "what can be automated?" (Denning). The benefits of asking this question are undeniable — more efficient pattern-matching, more advanced data mining, better data representation and the like. But these benefits, and the question that elicits them, do not address the humanities intellectually. They pertain to a relationship analogous to that between an accountant and his or her calculator — hardly a promising one for computing practitioners and humanities scholars alike. If we wish to have a computing of the humanities, we need to be asking a rather different question: "how can we best integrate automated processing with human thinking and acting?"
Empirical Modelling (EM-website), the approach around which this session has been organized, reflects a radical shift from the logical and linguistic philosophical stance of theoretical computer science to one based on the pragmatic empiricism of William James. It has been developed by two of the authors, Beynon and Russ, at Warwick University. The third author, McCarty, has independently developed a convergent idea of modelling based on the tradition of experimental science, recent historical and philosophical analyses of experiment, and the phenomenology of Martin Heidegger, Michael Polanyi and others. The convergence indicates, all three authors will argue, a highly promising basis for interchange between computing science and humanities computing. This basis takes us considerably further than previous attempts. (See e.g. Gardin; Koch; Computing the Future 1992; Computing and the Humanities: Summary of a Roundtable Meeting 1998; Orlandi; Beyond Productivity 2003. See also Transforming Disciplines: Computer Science and the Humanities, http://www.carnegie.rice.edu/.)
The first paper discusses the prospects for partnership between the humanities and computing from the alternative perspective afforded by Empirical Modelling. It identifies perceived dualities that separate the two cultures of science and art as the primary impediment to this partnership, and outlines how these can be dissolved in a vision for 'human computing'.
The second paper illustrates the key characteristics and potential of EM for the humanities with reference to a projected modelling exercise addressing the Erlkoenig theme (as represented in the work of Goethe, Schubert and Liszt). It also highlights how each of the six varieties of modelling identified by McCarty (2004) can be represented within an EM model.
The final paper discusses the implications of EM with reference to McCarty's account of the key role for modelling in the humanities (2005), and considers these in relation to James's "philosophic attitude of radical empiricism" and ideas from phenomenological sources.

Computing in the Humanities - Servant or Partner?

Meurig Beynon and Steve Russ
The term humanities computing evokes two images of relationship: one in which computing is the servant, the other in which it is a partner. To traditional humanists computing-as-servant is unproblematic — who does not wish to be served? But the more challenging notion of computing-as-partner promises the greater intellectual rewards. This paper proposes 'Empirical Modelling' as the basis for a new vision of human computing through which a strong and fruitful partnership can be built.
Humanities and computing in partnership?
When we trouble to take a close look, rather than simply to relegate computing below stairs, its relationship to the humanities seems deeply troubling: on the one hand, flawless manipulation of data; on the other, contingent interpretation. We are reminded of the familiar two cultures caricature of the relationship between arts and science (cf. Collini in Snow). Unfortunately, the majority view of computer science (CS) sits comfortably alongside this popular caricature. At the theoretical end, where the designation science best fits, CS describes formal, objective meaning as a computational recipe. But at the practical end, where application programming is done, CS faces the fourth decade of a messy software crisis. Uncertain human situations, including scholarly ones, have not meshed well with the science. Hence the quite separate concerns of theoreticians and practitioners within the field. New trends in computing subvert their separation, however. The manner in which data is represented and presented to the scholar is now open to negotiation, and it has become clear that different modes and technologies for presentation have significant cognitive implications. Neither the programmer nor the scholar is well-adapted to cope with this state of affairs.
Modern developments in practical computing present a serious challenge to computer science as it is currently understood. The sharply differentiated treatment of formal and informal meanings of programs is oriented towards applications in which mathematics plays a central role. This traditional view of computation made good sense in its historical context, when the archetypal role for the computer was "automating routine processes". As Brian Cantwell Smith has argued (Smith 1987; Smith 2002), a foundation for computation in logic may suit programs with a preconceived abstract functionality, but it is not well-adapted for dealing with the relationships between form and content encountered in modern computing practice. Through its capacity to generate rich experiences, the computer can liberate the imagination, and in principle suggest fertile new modes of interaction that defy preconception.
In acknowledging and exploiting the semantic impact of holistic experience, computing practice has made a transition that our science of computing has not. Trying to give a mathematical account of computing is like trying to account for musical experience solely by music theory. This motivates us to reappraise computing from a totally different perspective in which experience rather than logic has a privileged role.
The objections to this reorientation centre on perceived fundamental distinctions between kinds of experience. In commonsense thinking about computing and the humanities, for instance, we distinguish experience of physical reality, experience of the virtual world, experience that can be communicated — formally or informally — through language, experience that can be authenticated by scholarship or experiment, and affective experience such as is associated with the appreciation of works of art. Attributing an absolute status to these distinctions endorses the familiar fractured caricature of the relationship between sciences and arts, at the ends of a spectrum of experience leading from the material world to the miraculous. Both computing and the humanities have made significant intellectual and practical contributions to challenging the status of these distinctions. Consider, for instance, the ontological issues addressed by Gooding in his discussion of the status of virtual experiments in science, and the analysis of poetic treatments of the metaphysical and the material in Heaney. The alternative vision for computing endorsed by 'Empirical Modelling' is rooted in a philosophical position proposed by William James where the distinctions between different varieties of experience are taken to be no more or less than matters of classification (James). This is potentially significant both in respect of aligning the science of computing with its practice, and in negotiating — and perhaps in due course, consummating — the marriage of humanities and computing.
Human Computing and Empirical Modelling
This section takes up the idea of reappraising computing from a perspective in which experience rather than logic plays a privileged role. This involves turning from the relationship between computing and the humanities as disciplines to consider the more concrete relationship between humans and computers.
Through their enormous flexibility and power, and the ethereal medium of electronics, computers have greatly extended the machine metaphor. The activity of programming allows us to make new 'machines' of extraordinary range and variety. A widespread view compatible with this metaphor sees the computer characteristically as an 'information processor'. Underlying such a 'machine computing' outlook, the role of logic is central, from the specification to the verification of both programs and hardware.
There is, however, a perspective on computers and their use that is independent of the machine metaphor and more fundamental. It has always been present in computing but has been so over-shadowed by the viewpoint, and usefulness, of machine computing that it has often been overlooked.
Before making any use of the computer I need to be able to relate what I see and do on the computer with my situation in my own world outside the computer. For this I must be able to present a part of my world, or some phenomenon, on the computer in a recognisable fashion. When this is a matter of using the computer in a machine mode (e.g. for e-mail or word-processing) this act of representation is very familiar. But it is now possible to make computer models with which we can deliberately dwell upon our personal understanding of something of interest for its own sake, and without any functional use yet in mind.
This role for the computer, of building artefacts with which to think and explore, has been facilitated by the improving technological management of the electronic medium. This has become, like paint or music or language, a medium for self-expression. The fluidity and flexibility of the medium make it a potential match for close integration with the 'stuff' of human thought and perception.
The contrast, then, with the machine mode of the computer is the capacity of computer artefacts to offer us direct, 'felt' experience of parts of our own worlds. It is a 'likeness' established through the correspondence between the experiences, on the one hand, of interacting with our world and, on the other hand, of interacting with the artefact. It is this emphasis on the way computer artefacts may be experienced as if for the first time, then explored and developed before definite meanings have emerged, that is the essence of what we mean by 'human computing'. Computer artefacts themselves now become a significant source of experience, and — especially in terms of the quality of experiential interaction — they may even be offering us a new kind of experience.
Some of the early pioneers of electronic computing had a vision not unlike that of human computing. For example, many of the sentiments of the enthusiasts for electronic analogue computing (Small) resonate strongly with our ideas, and Licklider looked forward to a time when "men and computers would work in intimate association." But in the 1960s the technology made any such use of computers very difficult. Since then, spreadsheets have been the most successful software to embody the idea of human computing. It has, however, been the explicit aim of the Empirical Modelling (EM) project at Warwick to develop principles and tools that give priority to experience rather than logic, and that promote the integration between human and computer processes that is at the heart of our vision for human computing.
The Empirical Modelling Project has been pioneered and led by Meurig Beynon at Warwick for over fifteen years. The work has been taken forward in large measure by many cohorts of third-year project students and many research students. The overall guiding principle has been the development of computer artefacts that offer similar experiences, through interaction, to those in some part of the modeller's own world. Fundamental practical concepts that have shaped the principles and the model-building tools are those of observable, dependency and agency. The characteristic activity of EM is the experimental identification of relevant observables associated with some phenomenon and of reliable patterns of dependency and agency among these observables. It is a modelling process that is more primitive than, and so prior to, the commitments inherent in programming. The approach is a broad one having relevance across the whole spectrum of computing. We shall introduce the ideas of definitive scripts and agent-oriented modelling by means of a small example and demonstration, and will give an overview of the on-line material available on EM (EM-website).
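As a foretaste of that demonstration, the fragment below is a rough sketch in ordinary Python (standing in for, not reproducing, the special-purpose EM tools and notations developed at Warwick) of the spreadsheet-like idea at the heart of a definitive script: observables whose values are maintained by dependency, with redefinition as the basic unit of interaction. The observable names are illustrative inventions.

    # A minimal sketch of a 'definitive script': observables whose values
    # are maintained by dependency, as in a spreadsheet. An illustration
    # of the idea only, not the Warwick EM tools or their notations.

    class Model:
        def __init__(self):
            self.defs = {}          # observable name -> value or formula

        def define(self, name, value):
            """Define or redefine an observable, either as a plain value
            or as a formula (a callable) expressing a dependency."""
            self.defs[name] = value

        def __getitem__(self, name):
            v = self.defs[name]
            # A formula is re-evaluated on demand, so a change to any
            # observable it depends on is reflected immediately.
            return v(self) if callable(v) else v

    m = Model()
    m.define('width', 4)
    m.define('height', 3)
    m.define('area', lambda m: m['width'] * m['height'])   # a dependency

    print(m['area'])        # 12
    m.define('width', 10)   # an act of agency: redefine one observable
    print(m['area'])        # 40 -- the dependency is maintained

Redefinition here plays the role that editing a cell plays in a spreadsheet: the modeller interacts with state directly, prior to any commitment to a preconceived program behaviour.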

Not in the notes: Erlkoenig as a case study in Human Computing

Meurig Beynon
A companion paper (Paper 1) argues the need for a radically different perspective on computing that is particularly relevant to its role in the humanities. A key notion is dispelling the idea of an absolute duality in experience, and reinterpreting computing with respect to distinctions that rest on how experience is characterised. We can understand how this might work by recognising that semantic relations similar to those that arise in computer programming exist in the humanities: the pianist plays Chopin, and the score resembles a program. But where the computer scientist views the program as essentially defined by its precise abstract operational semantics, the musician — whether composer, pianist or analyst — takes a much more liberated view of the meaning of the musical score. The pianist is deemed to play a Chopin sonata even though there are some wrong notes. Playing Chopin and playing the piano are both human skills that clearly admit no exact ultimate level of attainment, and the counterpoint between the two is a commonplace theme in music analysis and criticism. Particularly pertinent in this context is Mahler's remark that "what is best in music is not to be found in the notes" (Shapiro), and the well-attested fact that Chopin's use of rubato defied precise notation in a score (Schonberg).
A better understanding of the distinction between a musical score and a conventional computer program helps us motivate an alternative approach to computer-based modelling that can do fuller justice to the concept of humanities computing — that of Empirical Modelling (EM). The archetypal computer program is intended for machine interpretation, and is optimised for a specific function and context of use. Though the results of executing the program can be experienced by the human interpreter in the appropriate user role, any human interpretation of the program in execution is in general a most specialised exercise in interpreting machine operation that is of its essence unintelligible within the context of use. What is more, the degree of specialisation and optimisation of the program to its function is typically such that the user-oriented interpretation disintegrates on changing the merest detail — all that remains to the programmer is to 'debug' the behaviour of the machine. Contrast the musical score. Though the aspiration of the pianist may be to trace the execution from beginning to end with the strictest adherence to the score, the process of interpretation resembles reading a computer program no more than it resembles reading a piano roll for a player-piano. (Indeed less, since in this analogy a computer program is typically more like a prescription for punching holes in a piano roll.) The pianist may enter the score at any point in time, extract melodic fragments, or adapt the written prescription in order to savour the experience of a particular chord, to shape the inflection of a melody, or to identify the essence of a technical difficulty. In this activity, in accordance with Mahler's dictum, the pianist will give ultimate priority not to being in every respect accurate to the score, but to evoking and communicating 'the felt experience'. In the spirit of Turner, the separation between the technical accomplishment and the musical effect is not a sharp duality: the two experiences of playing the piano and playing Chopin are blended in the experience of the human interpreter. The priority that is given to those aspects of the interpretation of the score that are least precisely documented is reflected in the way that we say "She played Chopin's Revolutionary Study" rather than "She used the piano to execute Chopin's Revolutionary Study." This distinction between stances towards interpretation speaks to a yet deeper tension between the values of the humanities and the method-tool-use paradigm of the business IT culture (EM-website 055).
The principles of EM, and the respects in which they represent a radical departure from conventional thinking about computing with implications for the humanities, can be illustrated with reference to a study in modelling music. For this purpose, our choice of theme is Erlkoenig, as first dramatised in verse by Goethe, then set to music by Schubert, and later transcribed by Liszt for piano solo. The objective for this case study is to show how the application of EM principles and tools is suited in principle to the development of an auxiliary model that can serve a whole variety of different functions for the human interpreter. At present, the construction of such a model is in its earliest stages, but its broad conception can be outlined by drawing upon well-established experience of EM for a wide range of applications (EM-website, EM-archive). An important and characteristic theme of EM that echoes sentiments expressed about modelling by McCarty (2004) is that the potential scope and function of the model cannot be preconceived, nor will the model ever represent more than "a temporary state in the process of coming to know." In this respect, it is crucially different from a conventional program in having no preconceived formal specification, and being intended and open for indefinite extension and elaboration.
The case study has been chosen to highlight a number of key issues: the fundamental significance of the shift in perspective towards the radical empiricist outlook of William James (1996) rooted in the idea that 'one experience knows another'; pertinent aspects of EM from a technical modelling perspective, such as the role for observation, dependency and agency, the scope for invoking concurrent agents in the interpretation, and the merits of EM in respect of combining models; how each of the six varieties of modelling identified by McCarty (2004) can be represented within a single EM model.
The importance of a radical empiricist stance stems from the need to account for a treatment of meanings in the humanities that is far beyond the scope — though not perhaps the aspirations — of the formal semanticists in computer science and AI (Smith 1987). Consider the audacity of the following extract from Maurice Brown's commentary on the Erlkoenig:
Even more remarkable, as was first pointed out by Sir Donald Tovey in a superb programme note on the song, is the treatment of the pianoforte when the child speaks. During the rest of the song we are observers: we watch the ride, we hear the child's voice and the father's reassuring answers. But only the child hears the Erlking, and the rocking, almost lulling, movement of the pianoforte accompaniment is the child's experience of the motion of the galloping horse, the warm protection of his father's arms, while he trembles at the sinister invitation. When he cries out, we revert to observers and the clamour of the hoofs, the rush of the wind, break again on our ears.
(Brown)
In Jamesian terms, both Brown and Tovey are testifying to the experience of a conjunction of two experiences (the texture of the musical accompaniment and the child's perspective on events) for which no formal explanation need be given. It is quite characteristic of such a conjunction that its recognition is to some extent enabled by a purely technical consideration — that these changes in texture come as an enormous relief to the accompanist, so taxing is the pianistic device that evokes the horse's unrelenting ride.
From a technical modelling perspective, Erlkoenig is a rich source of instances of agency, dependency and observation. EM makes use of techniques for distributed modelling (cf. EM-archive: claytontunnelSun1999) and animation (EM-archive: railwayYung1995) that can underpin concurrent engineering (EM-website: 034). The model-building can be framed with reference not only to the various perspectives of external agents (in this context, the poet, the composer, the singer, the accompanist, the translator, etc.) but also to those internal to the drama itself (the father, the child, the Erlking, the horse). A vital aspect of EM is that model construction is not compromised by optimisation to performing some specific function, as in conventional programming, so that blending of models is pervasive, and there is openness to extension, possibly even in the light of subsequent developments in tools and technology (cf. the new pianistic possibilities explored by Liszt in his transcription of Erlkoenig).
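Purely to suggest how agents internal to the drama might be framed, the following hypothetical sketch continues the Python idiom of the fragment in the companion paper; the observables and agent actions are illustrative inventions, not part of any existing EM model of Erlkoenig. It encodes as a dependency the observation of Tovey and Brown, quoted above, that the texture of the accompaniment follows the child's perspective.

    # A hypothetical sketch: Erlkoenig's internal agents as redefinitions
    # of shared observables. Names and actions are illustrative inventions.

    class Model:                     # condensed from the earlier sketch
        def __init__(self): self.defs = {}
        def define(self, name, value): self.defs[name] = value
        def __getitem__(self, name):
            v = self.defs[name]
            return v(self) if callable(v) else v

    m = Model()
    m.define('erlking_speaking', False)
    m.define('child_cries_out', False)
    # Dependency, after Tovey/Brown: while only the child hears the
    # Erlking, the accompaniment rocks and lulls (the child's experience
    # of the ride); otherwise we are observers amid the hoofbeats.
    m.define('accompaniment', lambda m:
             'rocking, lulling'
             if m['erlking_speaking'] and not m['child_cries_out']
             else 'clamour of hoofs')

    def erlking(m): m.define('erlking_speaking', True)   # agency in the drama
    def child(m): m.define('child_cries_out', True)

    print(m['accompaniment'])   # clamour of hoofs: we are observers
    erlking(m)
    print(m['accompaniment'])   # rocking, lulling: the child's experience
    child(m)
    print(m['accompaniment'])   # clamour of hoofs: we revert to observers

In the same spirit, observables for the perspectives of the singer, the accompanist or the translator could be added without disturbing what is already defined, reflecting the openness to extension just noted.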
The status of EM as a radical generalisation of modelling with spreadsheets makes it possible to envisage a role for modelling extending that illustrated by McCarty in his Analytical Onomasticon to Ovid's Metamorphoses (2005, Chapter 1). Musical counterparts for the analogy, representation, map, diagram, simulation and experiment can be found in modelling Erlkoenig and identified in EM. Of particular interest is the combination, in a musical context, of formal and informal semantic frameworks. One might for instance seek an authentic virtual reconstruction of an early performance of Erlkoenig as Schubert himself might have heard it (cf. Beacham), or wish to elaborate on the semi-formal analysis of the musical language of Erlkoenig that Cooke initiates in (Cooke). A precedent in EM for combining formal and informal semantic ingredients within a single model can be found in (EM-website: 051).

Towards a philosophy of modelling for humanities computing

Meurig Beynon and Willard McCarty
In developing a persuasive philosophical stance on humanities computing, the first task is to relate its aspirations to the current vision of computer science. In (Paper 1), Beynon and Russ propose that an alternative science of computing is needed to bring computing and the humanities into a more fulfilling relationship. McCarty (2004) identifies a better understanding of "what modelling is" as key to making sense of humanities computing. This paper — to be read in conjunction with (McCarty 2004) — revisits McCarty's arguments in the context of the critique of traditional thinking about computing motivated by the study of Empirical Modelling (EM) (Paper 1).
Informally, McCarty's Onomasticon (McCarty 2005) may serve as an archetypal example of EM. Though it has been built using commercial spreadsheet and database software, rather than the special-purpose tools that have been developed for EM (EM-website), its development exploits the essential principles and concepts of EM. It is characteristic of this development that (to paraphrase McCarty 2005) the Onomasticon, however finely perfected, is better understood with reference to temporary states in the process of coming to know rather than a fixed structure of knowledge. In thinking of the Onomasticon in EM terms, the term model on the computer is preferred to McCarty's computational model. The principal reason for this is that the way in which EM views the semantics of the Onomasticon is quite different from what is understood by the computational semantics of the underlying computer program (cf. Cantwell Smith's discussion of semantic relations in Smith 1987). Specifically, the manner in which the EM model (the Onomasticon) represents its referent (Ovid's Metamorphoses) is this: there is a repertoire of 'atomic interactions' that the modeller can make both with the model and with its referent, and these interactions are perceived by the modeller (McCarty) to connect the experience of the model with the experience of its referent.
As McCarty's careful analysis of terminology (McCarty 2004) indicates, the dynamic and provisional quality of the model argues against describing the model as 'a representation' of its referent. For reasons discussed at length in (EM-website: 078), the terminology that William James introduced in considering relations between experiences is preferred: "experience of the model knows experience of the referent". It is to be understood that the modeller will never be obliged to 'explain' why one experience knows another experience, nor to make any claims for the objectivity of this perceived relationship. This is the essence of James's Radical Empiricism (James): that relations between experiences are themselves given in experience.
Though it is accepted usage to refer to a spreadsheet as a model of a financial situation, this is not the sense in which model is most commonly used in computer science. Expressions such as model-checking, model-based reasoning and mathematical model allude to far more abstract semantic relations that are by no means directly apprehensible in experience. When we conceive a model as a set of logical equations or constraints, the manner in which the model is experienced is outside the semantic scope. Invoking the alternative semantic framework of EM entails being more discriminating about kinds of computing activity, and motivates a reappraisal of what McCarty (2004) identifies as the "decisive criteria" for modelling by computer: complete explicitness and absolute consistency, and manipulability.
Where consistency is concerned, it must be recognised that the experience a computer generates is not explicitly specified in every respect — at any rate not in the same sense that an abstract computation is explicitly specified. EM focuses on the experiential aspects of computer-based models, for which — as is appropriate for humanities computing in general — no presumption of complete explicitness and absolute consistency in informal semantics is required. Indeed, in (EM-website: 072), Beynon makes the case that the semantic framework of EM is aptly suited to dealing with situation, ignorance and nonsense ("the principle of SIN"). For this purpose, it is not the linguistic and logical frameworks supplied by Chomsky and Tarski, or the syntactic treatment of metaphor in logicist AI, that are appropriate (EM-website: 050), but semantics closer in spirit to the thinking of Lakoff and Turner.
Where manipulability is concerned, it may seem that we can manipulate representations effectively using a computer because we can modify programs. The notorious difficulty of adapting conventional programs to meet new requirements is evidence that this contention cannot be taken at face value. And where "one experience knows another" is concerned, there are serious conceptual and practical objections to deeming the common debugging cycle (as in "stop execution of program P, fix line 235, recompile, run program P 'to the same point as it was before' — whoops ... I've introduced another bug — etc etc ...") to be an atomic transition in experience. In practice, manipulability is bound up with contextual and pragmatic issues that are entirely alien to the formal semantics of computation. This is consistent with McCarty's observation that "manipulation ... requires something that can be handled [in] a time-frame sufficiently brief that the emphasis falls on the process rather than its product" (McCarty 2004). For this purpose, the notion that "the experience of adjusting the computer model should know the experience of adjusting the interpretation of the referent" is precisely what is required.
The decisive emphasis of EM is on what is known in immediate experience, and what in William James's terms is associated with "the most intimate conjunctive relation ... that experienced between terms that form states of mind" (James 44-45). Within this apparently limited frame of one experience knowing another in-the-now, all kinds of conceptions of the model are possible through assuming different kinds of context, observation and agency. This is the very subject of James's Radical Empiricism. James develops the story of knowledge to deal with expectations of what has not been experienced — knowledge that transcends direct experience. The rich quality of engagement with past and future experience that this demands is well-represented in EM, both in the characteristic inflection of the "what if?" interaction, and in the capacity to replay the entire process of construction, as one might in exposing the sequence by which the cells of a spreadsheet came to be defined. This facility, frequently exploited in presenting EM models, captures the aspiration for modelling identified by Dening — "[that we may] return to the past the past's own present, a present with all the possibilities still in it, with all the consequences of actions still unknown".
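A minimal sketch of this replay facility, again in the illustrative Python idiom of the earlier papers rather than the EM tools themselves: if each act of definition is logged, the model as it stood at any intermediate moment of its construction can be revisited. The helper names are hypothetical.

    # Replaying the construction of a model: each redefinition is logged,
    # and the log can be re-run up to any point, restoring 'a present
    # with all the possibilities still in it'. A sketch, not the EM tools.

    class Model:                     # condensed from the earlier sketches
        def __init__(self): self.defs = {}
        def define(self, name, value): self.defs[name] = value
        def __getitem__(self, name):
            v = self.defs[name]
            return v(self) if callable(v) else v

    history = []                     # the sequence of acts of definition

    def define_logged(m, name, value):
        history.append((name, value))   # record the redefinition
        m.define(name, value)

    def replay(upto):
        """Rebuild the model as it stood after the first `upto` definitions."""
        m = Model()
        for name, value in history[:upto]:
            m.define(name, value)
        return m

    m = Model()
    define_logged(m, 'width', 4)
    define_logged(m, 'height', 3)
    define_logged(m, 'area', lambda m: m['width'] * m['height'])
    define_logged(m, 'width', 10)

    print(replay(3)['area'])   # 12: the state before 'width' was redefined
    print(replay(4)['area'])   # 40: the model as finally constructed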
To appreciate fully the shift of perspective in EM, it is vital to distinguish the semantics given in experience in a state-of-mind from semantics based on behaviours (as in program semantics (Smith 1987)) — even when these are guided by experience (as in Turner's treatment of narrative (Turner), and Cantwell Smith's discussion of "the process semantics" (Smith 1987)). This is evidenced by the diversity of contexts behind the wide range of applications for EM (EM-website), and the associated diversity of models. As is illustrated in (Paper 2), EM can be used to generate just such rich varieties of model — analogy, experiment, simulation, map, diagram, representation — as are catalogued in McCarty (2004). This diversity is enabled precisely because an EM model is identified by a state and a body of latent anticipated interactions that can be more or less familiar and significant to the modeller, or any other human interpreter, and in this way serves as an interactive environment whose meaning is constrained only by the imagination. This delivers more than is envisaged by Minsky or Naur in respect of 'constructed models': beyond the confirmation of a theory, a place for "blind variation" in the sense of Vincenti — interaction "without complete or adequate guidance" potentially leading to discovery.
Several intriguing philosophical connections identified by McCarty (2005) are ripe for further scholarship and exploration. The suggestive links between EM thinking and the phenomenology of Polanyi and Heidegger echo the phenomenological interpretations of software development offered by Winograd and Flores, but also argue against invoking such interpretations in relation to traditional software practice. Of crucial importance in ensuring the universality of the concept of modelling, and in embracing activities that involve creation and discovery, is the ontological status of the model, the referent and the relation between them. The idea of an EM model as a construal invokes Vaihinger's "as if": neither true nor false, but as construed for the purpose in hand. Such a stance even underwrites propositions such as "our constructions continue to work, no matter how violent the changes in scientific opinion may be" (cf. McCarty 2005) that might be seen as authorising absolute claims for EM models as physical artefacts. This outlook accords with James's contention that "subjectivity and objectivity are affairs not of what an experience is aboriginally made of, but of its classification" (James 141), and his perspective on the difficulties of understanding the direct products of experience: "But how the experiences ever get themselves made, or why their characters and relations are just such as appear, we can not begin to understand" (James 132-133). The pragmatic importance of this ontological stance for humanities computing is that it helps to dispel the mystique that surrounds high art and hard science: a mystique that is the pretext for divisive absolute partitions in experience.

Bibliography