OSCE Deckers Paper

(Position paper at the OSCE Programme workshop, September 2016. Saved here for archival purposes. Please do not delete.)

Some thoughts on authority and peer review

Daniel Deckers

After months of thorough research, a scholar adds a surprising new reading to a digital edition. The next day, a frequent user of the platform where this edition resides wishes to contribute in a small way and corrects what he believes is an obvious error in the data entry (just why didn't he spot it before?).

While this scenario is arguably an exaggeration, and the particular case may seem to be one of the more easily averted problems with an open digital edition, it is an example of the worries we will have to address in order to win acceptance for such editions.

Initially, there will probably be two main approaches to creating a corpus of open digital editions in our field. One will be to convert existing (older?) editions into an electronic version. Assuming unambiguous encoding guidelines are adopted, the obvious main tenet for quality assurance is minimising the number of errors of transmission from the printed text to the electronic one. As with other electronic texts created from existing printed books, rather simple, wiki-like approaches will probably work, and there is going to be little contention, as the printed text would be a decisive reference in these cases (but see below).

The other will be to create newly prepared critical editions as electronic editions in the first place. For this, a traditional group of scholarly editors could simply follow their established routine, creating a more or less authoritative edition as the case may be, foregoing possible advantages of the digital medium, yet ensuring that a high standard of research is reflected in the edition. In this case, we are unlikely to see substantial improvements from casual readers, and the number of specialists who could make meaningful additions right from the publication date is going to be rather small.

However, as we aim to have editions that evolve, and as we are not going to content ourselves with just duplicating what could be done in the era of the printed book, the most interesting questions will be raised by how we can efficiently integrate more casual contributors on various levels without sacrificing the quality and scholarly integrity of our output.

Let's try to classify possible contributions to an evolving digital edition:

  • correction of typographical errors (which might either have been introduced in transferring material to the digital medium or have already existed in a previous printed edition)
  • correction of errors in interpreting previous materials (e.g. incorrect encoding of an older apparatus, misattribution, etc.)
  • new readings from a manuscript not previously included in the edition
  • new additional materials (commentary, translations, etc.)
  • updates or corrections to existing digital material (improved readings, new conjectures, more accurate commentary, more elegant translations, etc.)

For all of these categories but the first, we have the problem that anything based on interpretation is itself open to interpretation, and that some of the work required presupposes access to and familiarity with source materials that will not necessarily all be freely available, so that the results are not easily verifiable even by those select few with the necessary skills.

While we would want corrections contributed by all who are willing, and new materials by all who are able, we would want updates to the existing material only from those qualified to make the necessary judgements.

We cannot, therefore, use a completely egalitarian approach, i.e. the unadulterated wiki principle. While this might help with typographical errors (though would you trust it when your crucial new theory could be thwarted by a spelling difference?), its use in the other areas would presuppose that every contributor exercises extreme restraint, adding and changing things only within his specialised competence. Even if we added a voting system (as suggested by previous speakers) to judge changes (or the likelihood of particular variant readings, as has also been suggested), I doubt that mere numbers are a suitable means of arriving at the right conclusions.

Must we therefore limit ourselves to a strictly hierarchical approach and install an editorial team to supervise our corpus of digital editions? Such a team would have to be a cross between a group of scholarly editors, as found in large-scale editorial enterprises at academies of sciences, and the editorial board of a scholarly journal. While it could conceivably be rather efficient as long as the corpus is limited to a very specific field of ancient authors, such a solution is hardly envisionable if we really aim to have a large number of editions ranging from the most Classical texts to the Byzantine.

What then are the instruments to assure quality and yet go beyond traditional structures? One answer lies in the encoding itself: since we aim to encode variants, we can also encode modern-day variants, i.e. encode the opinions of several modern contributors. The question of quality then also becomes one of authoritative version(s). There are two ways of backing the quality of encoded alternative interpretations (or readings, corrections, etc.): one lies in the authority of their contributor(s), the other in the authority of those who support them. In both cases, the authority can be either extrinsic, e.g. resting on the esteem in which a scholar or group of scholars is held, or intrinsic, i.e. based on some system of evaluating the trustworthiness of a particular individual's other contributions.
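To make this a little more concrete, here is a minimal sketch (in Python, with invented names and scores, an illustration of the idea rather than a proposed implementation) of how an encoded alternative might carry both its contributor and its supporters, so that its authority can be computed either way:

    from dataclasses import dataclass, field

    @dataclass
    class Reading:
        text: str                                        # the proposed reading, correction, etc.
        contributor: str                                 # who proposed it
        supporters: list = field(default_factory=list)   # who endorses it

    # hypothetical authority scores per user, however they are arrived at
    AUTHORITY = {"scholar_a": 0.9, "scholar_b": 0.7, "casual_user": 0.2}

    def combined_authority(reading):
        """Back a reading either by its contributor or by those who support it."""
        own = AUTHORITY.get(reading.contributor, 0.1)
        backing = sum(AUTHORITY.get(u, 0.1) for u in reading.supporters)
        return own + backing

    variants = [
        Reading("new conjecture", "scholar_a"),
        Reading("transmitted text", "casual_user", supporters=["scholar_b"]),
    ]
    ranked = sorted(variants, key=combined_authority, reverse=True)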

It would appear that what we need is a system to map the authoritativeness of experts in our fields onto a user management system. While very fine-grained approaches are theoretically possible, I cannot see that anything more than a rather simple classification of which user is an expert in which particular subfields would meet with acceptance. This, combined with a system of letting users evaluate the usefulness of contributions (i.e. a "voting system") that weighs the votes according to authority, and possibly feeds back into the authority value of the contributor, might already provide enough checks and balances to keep our editions usable. Only time will show.
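By way of illustration only, the following sketch assumes a deliberately coarse per-subfield authority classification and shows how votes could be weighted by it and how the outcome could feed back into a contributor's authority value; all names, scores and the feedback rate are made up:

    # per-user authority, broken down into a coarse set of subfields
    authority = {
        "scholar_a": {"papyrology": 0.9, "byzantine_texts": 0.3},
        "scholar_b": {"papyrology": 0.4, "byzantine_texts": 0.8},
    }

    def weighted_score(votes, subfield):
        """votes: list of (user, +1 or -1) pairs, each weighted by subfield authority."""
        return sum(v * authority.get(u, {}).get(subfield, 0.1) for u, v in votes)

    def apply_feedback(contributor, subfield, score, rate=0.05):
        """Nudge the contributor's authority up or down according to the outcome."""
        current = authority.setdefault(contributor, {}).get(subfield, 0.1)
        authority[contributor][subfield] = min(1.0, max(0.0, current + rate * score))

    votes = [("scholar_a", +1), ("scholar_b", -1)]
    score = weighted_score(votes, "papyrology")          # 0.9 - 0.4 = 0.5
    apply_feedback("casual_user", "papyrology", score)   # the contributor gains a little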

Since we do need to ensure that quotations from our editions can be referenced unambiguously, we can also easily offer the option of "freezing" certain versions of them. Among such milestone versions we would obviously have the original edition, whether a transcription of a printed one or a newly created one, as well as those that result from the planned addition of new materials in the context of specific projects. Moreover, to account for more accidental improvements accumulating through various small contributions, there could also be periodic milestones. So that these are not mere snapshots, we must presuppose, again, some kind of editorial team that decides which accumulated materials should be committed to the new authoritative version.
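One conceivable way of realising such frozen milestones, sketched here with assumed data structures and names, is to serialise the accepted state of an edition and identify it by a content hash, so that a quotation can reference the edition together with an unambiguous milestone identifier:

    import datetime
    import hashlib
    import json

    def freeze_milestone(edition_id, accepted_readings):
        """Produce an immutable, citable record of the edition at this point."""
        payload = json.dumps(accepted_readings, sort_keys=True, ensure_ascii=False)
        digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        return {
            "edition": edition_id,
            "frozen_on": datetime.date.today().isoformat(),
            "content_hash": digest,          # what a citation would point to
            "readings": accepted_readings,
        }

    milestone = freeze_milestone(
        "example_author_epistles",
        {"1.2.3": "transmitted text", "1.2.4": "new conjecture"},
    )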

This latter approach could of course be combined with the voting mechanisms and the authority system, and thus perhaps be automated to a degree over time. Add discussion forums and similar features to the platform hosting our corpus of editions, and changes that are found to be potentially dubious (or potentially beneficial) might be marked for discussion and then decided upon by some form of vote for the next milestone of an edition.

While most of these thoughts may seem to presuppose a single repository and platform in a single place, I believe this can and will in fact be extended to much more distributed systems. We do, however, need a more or less universal system for judging authority (though concurrent systems are envisionable, and I leave the interesting effects this would have on definitive versions of texts to the reader). At the same time, we will need ways of ascertaining that the texts, together with the information on their contributors, remain intact and unaltered. I propose that both requirements (one on the user side, the other on the server side) might be met by systems of credentials based on principles somewhat similar to the web of trust used in public-key cryptography.
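As a rough illustration of these two requirements rather than a security design, the sketch below reduces the user-side question (is a contributor vouched for by a chain of already-trusted identities?) and the server-side question (are the text and its contributor record intact?) to endorsement chains and content digests; the names and the trust roots are invented:

    import hashlib

    # a toy web of trust: each key lists the users who have vouched for it
    endorsements = {
        "scholar_b": ["scholar_a"],
        "casual_user": [],
    }
    TRUST_ROOTS = {"scholar_a"}              # identities trusted a priori

    def is_trusted(user, seen=None):
        """A user is trusted if a chain of endorsements leads back to a root."""
        seen = seen if seen is not None else set()
        if user in TRUST_ROOTS:
            return True
        if user in seen:
            return False
        seen.add(user)
        return any(is_trusted(e, seen) for e in endorsements.get(user, []))

    def integrity_digest(text, contributors):
        """Server-side check: a digest over the text and its contributor record."""
        record = text + "\n" + "\n".join(sorted(contributors))
        return hashlib.sha256(record.encode("utf-8")).hexdigest()

    assert is_trusted("scholar_b") and not is_trusted("casual_user")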