LIVING WITH THE GUIDELINES

THE FIRST TEI EUROPEAN WORKSHOP, OXFORD UNIVERSITY COMPUTING SERVICE, 1-2 JULY 1991

 

Donald A. Spaeth

 

The first European workshop of the Text Encoding Initiative (TEI) was held in Oxford on 1-2 July 1991. The TEI is an international effort to develop and disseminate guidelines for encoding and exchanging machine-readable texts. The first phase of the TEI was completed with the publication of the Guidelines for the Encoding and Interchange of Machine-Readable Texts (1990), and several Working Committees, Working Groups and Affiliated Projects are now expanding and refining these guidelines.

The two-day workshop was attended by fifty people from fourteen countries, of whom the largest number were from Britain. Not surprisingly, linguistics and language studies were the best-represented subject areas, but by no means the only ones. The workshop was taught by the TEI Co-Editors, Lou Burnard and Michael Sperberg-McQueen, and by Elaine Brennan (Brown), Harry Gaylord (Groningen) and Terry Langendoen (Arizona). The speakers had obviously worked very hard, and the two days went without a hitch.

The workshop included a neatly balanced mixture of group discussions, lectures on technical issues, and software demonstrations and practicals. It opened with a warm-up session, 'Why Tag Texts?'. The group looked at the workshop's core text, portions of Mary Robinson's 'Thoughts on the Condition of Women' (1799), both in its original printed format and as keyed in by the Brown Women Writers Project and marked up by Michael Sperberg-McQueen. The task was to identify textual elements which should be marked up, and this raised a number of issues. There was some disagreement between those who believed every descriptive variation should be encoded, including the breadth of vertical lines and whether left or right quotes were used, and those who did not. Joy Jenkyns (Oxford) pointed out that typography could hold clues to interpretation: the similarity between a long 's' and an 'f', for example, might point to a sight-rhyme between 'wise' and 'wife'. Jeremy Clear (OUP), tongue firmly in cheek, deployed the 'reductio ad absurdum' argument that we should mark up an upper-case 'I' as 'a vertical line with two serifs'. (In the closing session on Tuesday, John Dawson (Cambridge) suggested that only elements which were to be processed by computer needed tagging, and there was general agreement that it would be desirable to accompany documents with digital images of the original.)
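To give a flavour of the choice at stake, the fragment below sketches two possible encodings of a short phrase; the element and attribute names are purely illustrative, and are not taken from the Women Writers Project transcription or prescribed by the Guidelines.

    A 'maximal' encoding, recording typographic detail:

        <q rend='double-quotes'>the <hi rend='long-s'>condition</hi> of women</q>

    A 'minimal' encoding of the same phrase:

        <q>the condition of women</q>

The question debated in the opening session was, in effect, how far down the first road any given project needs to go.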

We returned to group discussion of tagging after lunch, in a session entitled Textual Anarchy: The Challenge for the TEI. Lou Burnard had chosen examples of texts from those held in the Oxford Text Archive and had attempted to replace the tagging scheme used in the original with TEI tags; the examples came from the Paston Letters, a blues lyric and Beowulf. Our task was to match the features tagged in the two versions; extra points were awarded for observing elements which had been marked up incorrectly or which the TEI could not mark up. The session did an excellent job of pointing out why the TEI is necessary, since each example used its own idiosyncratic scheme, although everyone was too embarrassed to admit to having scored the most points!

The TEI technical presentations included reviews of basic SGML and TEI concepts and an exposition of advanced TEI features. The reviews were overly brief and schematic, containing nothing new for readers of Draft 1 of the TEI Guidelines (TEI P1), while offering insufficient guidance for novices; the mixed experience of the audience made it difficult to judge the right level for these sessions. Terry Langendoen's paper on advanced features explored techniques for encoding linguistic feature structures and for abbreviating verbose coding by defining thousands of entities.

I found particularly useful the analogy he drew between feature structures and relational database tables, since as an historian I need techniques for marking up record structures in text, and the techniques he was describing (and still developing) clearly had applications outside the field of linguistics.
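As a rough sketch of the entity technique (the feature and value names here are hypothetical, and the exact tags proposed in TEI P1 may differ), a verbose feature structure can be declared once as an entity and then invoked by a short reference wherever it applies:

    <!ENTITY nom.sg "<fs><f name=case><sym value=nom></f>
                         <f name=num><sym value=sg></f></fs>">

    A tagged word then carries the whole structure in a few characters:

    <w>regina&nom.sg;</w>

On the database analogy, each feature structure behaves rather like a row in a relational table, with the feature names as column headings and their values as the fields.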

The practical sessions answered the common complaint that there is little software for preparing and analysing TEI-conformant texts. Lou Burnard briefly outlined the software choices and the issues to be considered in choosing software, distinguishing between parsers, editors, filters, formatters and retrieval systems. Two sessions, Uses for Tagged Texts and a TEI Users' Forum, demonstrated examples of several of these types of software. Filters or transducers provide a means of converting other systems of tags into SGML or vice versa. Examples included a Nota Bene program which converted SGML tags into NB formatting; KEDIT macros converting SGML tags into COCOA tags for analysis with micro-OCP; and a further transducer. Filters are useful for converting already-tagged text into TEI-conformant text, but for new texts SGML editors have the advantage of validating the markup automatically as it is entered. Two hands-on practicals gave us the opportunity to try out two editor/parsers, Mark-It (DOS) and Author/Editor (Macintosh), and we saw how the latter uses stylesheets associated with SGML tags to produce formatted output. On the retrieval side, we saw a simple SPITBOL program produce a list of all proper names in the core Robinson text, as well as a prototype of the Oxford Textual Analysis System under development by OUP. Also on show were Collate, a program for collating variant versions of manuscripts, which now accepts TEI-tagged text as input and can produce it as output; and RUTH, an editor which allows the user to tag texts using a KWIC concordance. The Users' Forum included reports on the forthcoming Chadwyck-Healey CD-ROM database of English poetry, to be distributed with TEI markup; and the Wittgenstein Archives, which is developing its own distinctly non-TEI markup and analysis software.
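The kind of mapping such a filter performs can be pictured with a tiny example; the tags on both sides are invented for illustration and are not taken from any of the programs demonstrated.

    COCOA-style references:      <A ROBINSON>
                                 <T THOUGHTS ON THE CONDITION OF WOMEN>

    SGML/TEI-style equivalent:   <author>Robinson</author>
                                 <title>Thoughts on the Condition of Women</title>

A filter of this sort is essentially a set of pattern-matching rules translating one tag syntax into the other, in either direction.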

The workshop closed with a talk by Michael Sperberg-McQueen on TEI-conformance and a general discussion on the TEI and the workshop itself. Michael Sperberg-McQueen drew a distinction between the different formats in which textual data might be held: at data capture, for a specific application, as stored on a local computer, and for interchange. Ideally, text should be held in a single format which can be understood by many applications rather than in a different format for each application. In a change from the Guidelines, he announced that the TEI would in future distinguish between TEI-conformant format (restricted to SGML but allowing all local characters) and TEI-interchange format (using the subset of ASCII defined in the Guidelines). He outlined several desiderata for software, including minimal tag redundancy, allowing attributes to be used to differentiate variants of a tag, and selective display, so that the user can turn off selected tags or elements for viewing.
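One way to picture the distinction (the sentence is invented; the entity names are standard ISO Latin-1 entities): a locally held text may use whatever characters the local system supports, while the interchange version spells the same characters out as entity references drawn from the ASCII subset.

    Local format:        <p>Elle est déjà naïve.</p>
    Interchange format:  <p>Elle est d&eacute;j&agrave; na&iuml;ve.</p>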

In the final session, participants expressed concern about the cost and complexity of marking up text with SGML. It was claimed that the costs of data definition, data entry (including training) and storage, particularly given the verbosity of SGML, put it beyond the reach of many publishers and projects, perhaps of all but large government-funded ones. As for complexity, it was clear that a number of participants were daunted by the prospect of wading through the Guidelines and SGML manuals. Several participants argued that compendia were needed which identified the tags relevant to each subject area, although Michael Sperberg-McQueen said that it was too early to prepare these since the TEI was still under development.

These anxieties about TEI markup suggest that future workshops must devote more time to the practical issues of developing Document Type Definitions (DTDs) and marking up texts from participants' own research. One person observed that we had not examined Document Type Definitions, although we had been told that they were a crucial part of a TEI-conformant text, not least because they document the tags used. In fact, the booklet 'An Introduction to TEI Tagging', which was given to all those attending, provides a suitably gentle introduction to tagging, as well as a sample DTD used to mark up the Robinson core text. However, even this DTD, described as 'a simplified TEI document type description', is eight pages long and includes 85 element tags. Perhaps future workshops should use this booklet more explicitly as a workbook in a practical session, replacing one or more of the software sessions. Training of this sort is crucial if TEI recommendations are to be followed widely.
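For readers who have not yet met one, a DTD is simply the formal declaration of which elements a document may contain and how they nest. The toy fragment below, far simpler than the eight-page sample distributed at the workshop and with invented element names, shows the general shape:

    <!ELEMENT poem    - -  (title?, stanza+) >
    <!ELEMENT title   - O  (#PCDATA)         >
    <!ELEMENT stanza  - -  (line+)           >
    <!ELEMENT line    - O  (#PCDATA)         >
    <!ATTLIST line         n NUMBER #IMPLIED >

A parser uses declarations of this kind to check that a tagged text is structurally valid, which is why the DTD matters so much for interchange.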

Running a workshop for an audience mixed both in discipline and experience is difficult. The use of Robinson as a core text was an effective device, but there was always the risk (particularly in the opening session) that people would think that this laid down what must be marked up for TEI-conformance. On the contrary, each scholar will only mark up the elements which he or she wishes to study. This is why it is important for workshop participants to be given an opportunity to tag their own texts under supervision.

This raises the broader question of how prescriptive the TEI should be. While some participants expressed concern that the TEI was too prescriptive, others pointed out that users could develop their own idiosyncratic attributes, once again creating an obstacle to free interchange of data. Should the TEI (with help from subject-specific working parties) develop increasingly detailed descriptions of document types and tags (including even subject-specific DTDs) which all scholars will use? Or should scholars be left free to develop their own tags and DTDs based upon their research needs, with SGML-conformance providing a mechanism for documentation and therefore easing exchange? The latter approach is of particular relevance for subjects relatively new to text-based analysis, such as history, but TEI compendia and training will be needed.

I found the TEI Workshop both enjoyable and stimulating. It is hard to see how much more could have been packed into two rich days, which included a reception given by the CTI Centre for Textual Studies and an evening punting on the River Cherwell! I was pleased to see that the SGML and TEI communities are so healthy, and hope that this will be only the first of many TEI workshops in Europe.

 

Donald A. Spaeth, Computers in Teaching Initiative Centre for History with Archaeology and Art History, University of Glasgow, 1 University Gardens, Glasgow G12 8QQ, UK.

 

