Some thoughts on Digital Humanities in Norway

Espen S. Ore, Dept. of Linguistics and Scandinavian Studies, Unit for Digital Documentation, University of Oslo

1. Introduction
While new developments, ideas and methods continually appear, there is a continuity in Norwegian Digital Humanities that goes back to the days of “Computing in the Humanities” in the 1970s. In 1972, the Norwegian Computing Centre for the Humanities was created by the Norwegian Research Council (then the NAVF – Norges Almennvitenskapelige Forskningsråd, now NFR – Norges Forskningsråd). This was a national institution located at the University of Bergen, while there were consultants at the three other universities then existing in Norway. From 1973 to 1991 the Centre published the journal Humanistiske Data, which is available as PDF facsimile today[1] and is an interesting source for the history of Norwegian Digital Humanities. Looking through the issues of this journal, it is easy to see that in the first years much work was concerned with building archives. But Computing in the Humanities was not limited to that field: additional work was done with statistical tools and the preparation of data sets for such tools. From the second half of the 1980s the centre in Bergen placed much emphasis on multimedia and hypermedia, partly based on the assumption that scholarship as well as teaching in the Humanities was concerned with more than just textual and numerical data. The Norwegian Computing Centre for the Humanities ceased to be a national institution in 1992, when it became part of the University of Bergen, and since then there has been no national institution for Digital Humanities or Humanities Computing in Norway.

As a parallel institution to the Norwegian Computing Centre for the Humanities, the Research Council established a Social Science computing center (Norsk Samfunnsvitenskapelig datatjeneste – NSD – Norwegian Social Science Data Services), also located at the University of Bergen and now an independent national institution.[2] Although this center is mainly aimed at providing data and tools for studies in the social sciences, it is also used by historians.

Already in the 1980s there were field-specific subdivisions within the overall field of Humanities Computing. Computational Linguistics soon began to follow its own path of development, in line with an international trend. This became very clear during the Socrates/Erasmus thematic network project ACO*HUM, whose work led to the conference “The Future of the Humanities in the Digital Age”[3] and the book “Computing in Humanities Education”[4]: during that project the representatives from the Computational Linguistics community found that there was no common ground between their field and the rest of the Humanities disciplines. Other disciplines in the Humanities have built their own scholarly traditions and methodological toolboxes to such a degree that it may at times be difficult to view it all under one single umbrella. There are, however, some overlapping problems and methods. In the following I will look mainly at text-related studies, especially those related to free text/natural languages, but I will also consider some of the developments in Norway concerning tabular text data, material culture and more.

2. Digital Humanities in Norway from around 1990
History is a discipline that uses tools common to both the Humanities in general and the Social Sciences. From the early days of Humanities Computing in Norway, tools were developed for entering and storing tabular data[5] which could then, in a next step, be analyzed with statistical tools such as SPSS.[6] Some of the early data registration projects done for historical research concerned old census data. The first “complete” Norwegian census – meaning that all persons, including children, were counted, not only taxpayers or possible soldiers – was the one from 1801, and it was also the first to be computerized. This project, led by Jan Oldervoll at the University of Bergen, led to further work on digitizing Norwegian census data; the data set has since been refined and is available along with other old census data.[7]

More or less structured data were also important in linguistic studies and, to a certain degree, in literary scholarship. In Norway, the 1970s were the starting point for the construction of and work with text corpora. These tools are very much still with us today, and their use has moved into other disciplines: it is, for instance, difficult to imagine serious lexicographical work being done without text corpora.

Throughout the 1980s and in the first years of the 1990s, text archives containing free, running text (as opposed to the structured, lemmatized text chunks typical of text corpora used in linguistic research) became more important. An early and important international influence came from the Thesaurus Linguae Graecae (TLG), then being constructed at the University of California, Irvine, where it is still maintained.[8] The TLG developed the so-called Beta Code, designed to make it possible to store classical Greek in a 5-bit character encoding system. Beta Code is still used within the TLG; it is thus one of the oldest encoding systems developed for Humanities Computing still in use.[9]
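Beta Code maps Greek letters and diacritics onto a small repertoire of Latin characters: letters become Latin capitals, and accents and breathings follow the vowel they belong to (e.g. LO/GOS for λόγος). The following is a minimal, hypothetical sketch of such a transliteration, covering only a small subset of the scheme (no capitals, iota subscript or editorial sigla); the function name and mapping tables are my own, not the TLG's tooling, and diacritics are emitted as combining characters rather than precomposed glyphs:

```python
# Illustrative subset of TLG Beta Code transliteration.
import re

BETA_LETTERS = {
    "A": "α", "B": "β", "G": "γ", "D": "δ", "E": "ε", "Z": "ζ",
    "H": "η", "Q": "θ", "I": "ι", "K": "κ", "L": "λ", "M": "μ",
    "N": "ν", "C": "ξ", "O": "ο", "P": "π", "R": "ρ", "S": "σ",
    "T": "τ", "U": "υ", "F": "φ", "X": "χ", "Y": "ψ", "W": "ω",
}
BETA_DIACRITICS = {
    ")": "\u0313",   # smooth breathing
    "(": "\u0314",   # rough breathing
    "/": "\u0301",   # acute accent
    "\\": "\u0300",  # grave accent
    "=": "\u0342",   # circumflex (perispomeni)
}

def beta_to_greek(text: str) -> str:
    """Convert a Beta Code string (subset) to Greek with combining marks."""
    out = []
    for ch in text:
        if ch in BETA_LETTERS:
            out.append(BETA_LETTERS[ch])
        elif ch in BETA_DIACRITICS:
            out.append(BETA_DIACRITICS[ch])  # mark follows its base letter
        else:
            out.append(ch)
    # Sigma at the end of a word takes its final form.
    return re.sub(r"σ(?=\s|$)", "ς", "".join(out))

print(beta_to_greek("LO/GOS"))  # logos, with an acute on the omicron
```

A full converter would also have to handle capitals (marked with `*` in Beta Code), punctuation conventions and normalization to precomposed characters (e.g. via `unicodedata.normalize`).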

From the late 1980s the idea of digital critical text editions became important also in Norway. In Germany, for example, much work had been done in this area, such as the development of the TUSTEP tools led by Professor Wilhelm Ott in Tübingen[10] – these tools were, for instance, used by Hans Walter Gabler in his synoptic edition of James Joyce's Ulysses[11]. In Norway, two large-scale projects started in 1990: the electronic edition of Ludwig Wittgenstein's Nachlass at the University of Bergen[12] and the Documentation Project (Dokumentasjonsprosjektet), a national project with its leader and administration located at the University of Oslo.[13] While the Wittgenstein project published one author's papers, the Documentation Project had a much wider scope: on the one hand it digitized museum data, acquisition catalogs, photos and other kinds of metadata and digitized data, which also implied the construction of database models that are still in use; on the other hand, the project stored encoded (SGML) transcriptions of literary and documentary text sources. These texts have for the most part later been re-encoded into XML. Both projects have made an international impact on methodological development. The Wittgenstein Archives (WAB) at the University of Bergen developed its own encoding system, MECS (Multi-Element Code System), since this happened at a time when SGML seemed the only option and the TEI (Text Encoding Initiative) had not yet released its first version (TEI P1).[14] After the CD-ROM publication of the Nachlass in 2002, the archive was transformed into XML/TEI. But it is worth noting that when XML itself was under development, the idea of well-formed documents (as distinct from documents valid according to a DTD or schema) was taken into XML from MECS.[15] The Documentation Project not only built a large archive of literary and documentary texts.
It also created models for storing data such as acquisition data from archeological museums. The Documentation Project and its later incarnations, such as the Unit for Digital Documentation at the University of Oslo, participated in the ICOM/CIDOC work on a conceptual reference model, the CIDOC-CRM[16], which is now an ISO standard. The CIDOC-CRM has also influenced work on the international bibliographic standard FRBR, and this has resulted in FRBRoo, produced in collaboration with the international library organization IFLA.[17]
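The well-formedness criterion mentioned above can be shown in a few lines: a well-formed document is one whose tags nest and match properly, something a parser can check without any DTD or schema, whereas validity is the stronger requirement of conformance to such an external grammar. A small illustrative sketch using Python's standard library:

```python
# Well-formedness can be checked without any DTD or schema: the parser only
# requires that tags nest and match. Validity (against a DTD or schema) is a
# separate, stronger check that needs an external grammar.
import xml.etree.ElementTree as ET

well_formed = "<p>A <hi>well-formed</hi> fragment.</p>"
not_well_formed = "<p>A <hi>broken fragment.</p></hi>"

ET.fromstring(well_formed)  # parses: every start-tag has a matching end-tag

try:
    ET.fromstring(not_well_formed)
except ET.ParseError as err:
    print("rejected:", err)  # mismatched tags: not well-formed
```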

The Wittgenstein edition and the work done in the Documentation Project are also important for the next developments in critical and/or scholarly editions, which first manifested themselves in 1998 when work started on the project Henrik Ibsen's Writings (HIW).[18] As we will see later, one of the important features of Norwegian and Nordic large-scale projects producing digital editions is collaboration, found both on a national level between institutions and projects and on a Nordic level. Back in the 1970s the Computing Centre for the Humanities at the University of Bergen started work on a concordance to Henrik Ibsen's plays and poems.[19] In the early 1990s the University of Oslo produced a collection of digital facsimiles of all of Ibsen's manuscripts and letters available at that time – mainly at the Royal Library in Copenhagen and at the National Library of Norway.[20] When the new Ibsen edition project HIW started, much of the theory behind the edition was influenced by the work on the new edition of Søren Kierkegaard's writings at the University of Copenhagen[21], and all the transcriptions from the concordance project in Bergen and the facsimiles from the (separate) facsimile project were made available to this new Ibsen edition.

The Nordic collaboration was formally recognized when the Nordic Network for Edition Philology (NNE) was established in 1995.[22] This network is concerned with editions of modern texts in general, but there are links to networks working on medieval texts. Within the NNE, an informal Special Interest Group (SIG) for electronic/digital editions organizes its own workshops and conferences. This network and the tradition of sharing data and knowledge can be seen in the further development of digital editions: people who worked on the Ibsen project are now working on other large-scale projects such as the publication of the painter Edvard Munch's writings (he left a large corpus of written material)[23], and there is international collaboration such as in the publication of the Norwegian-Danish author Ludvig Holberg's works.[24] In Norway, the National Library produces searchable facsimiles of all of its printed books (some of this work is internationally available, but some only locally for copyright reasons), and it hosts digital collections. One of these collections, originally established by the Norwegian Society for Language and Literature (NSL), publishes critical editions under the banner of NSL as well as other quality-approved digital texts.[25] The National Library also hosts Språkbanken, a collection of Norwegian language data mainly aimed at linguistic research and developers in language technology.[26]

3. Some recent and/or ongoing projects
Digital editions and text corpora have a long tradition in Norwegian Digital Humanities. A new generation of scholars has grown up over the years, and at the University of Oslo a toolbox for lexicography has been developed with a background in the Department of Linguistics and Scandinavian Studies. It is in daily use in the large-scale project Norsk Ordbok (Norwegian Dictionary) 2014[27], and some of the tools have also been used for lexicographical work in Zimbabwe and other African countries.[28]

At the Department of Philosophy, Classics, History of Art and Ideas (IFIKK) at the University of Oslo, the PROIEL (Pragmatic Resources in Old Indo-European Languages) project ran from 2010 to 2012.[29] The project resulted in a multilingual corpus as well as tools for building a corpus, for grammatical analysis, and for search and retrieval of texts from the corpus. A separate project working on medieval Norse material, MENOTA (Medieval Nordic Text Archive),[30] established in 2001, has now used the PROIEL tools for separate Norse grammatical tagging[31] while it also maintains its own TEI-based archive, which in many ways is also an edition of the Norse manuscripts.

Scholars working on various aspects of musicology were among the early users of computers in Norway. Today we find projects spanning from technological tools for the analysis of sound/music to a large-scale national project concerning the musical heritage of Norway, Norwegian Musical Heritage.[32] As in the case of the large-scale literary projects, here too we find a project involving multiple institutions – not only universities but also, for instance, the National Library of Norway. This project uses MEI (Music Encoding Initiative) encoding, which may in many ways be compared to the general use of TEI in literary projects.

4. Digital Humanities as an academic degree?
Humanities Computing was established as a one-term (semester) course at the Faculty of Humanities at the University of Oslo around 1980, and this course continued until 1998, when it was integrated with other courses at the Department of Informatics at the Faculty of Science. At the University of Bergen, a Department for Information Science was established in the late 1970s at the Faculty of Social Science, but aimed also at students at the Faculty of Arts. Over time, however, this department moved more in the direction of the Social Sciences. The University of Bergen also established a Department for Humanities Computing, later transformed into a subdivision for Digital Culture which is flourishing as part of the larger Department of Linguistic, Literary and Aesthetic Studies. But while this sub-department is well established and produces good scholarly work, it is not so much a department (or sub-department) for Digital Humanities as such, but rather a place where (as the title suggests) digital culture is the object of study. In addition to academic departments such as those listed above, some Digital Humanities tools (such as digital editions and text encoding) have at times been taught as parts of curricula. What still seems to be lacking are departments and degrees centered on Digital Humanities.

5. Digital Humanities – a Norwegian revival
As described above, there was a national Norwegian Center for Computing in the Humanities (NCCH) that existed from 1973 to 1991. From the late 1990s there seems to have been low interest – at least at national or institutional top levels – in general Digital Humanities; this integrative approach was replaced by various discipline-specific developments and projects. But with a growing interest in Digital Humanities also on an institutional level, things seem to be changing now (June 2014). In the 1970s much of the work done by the NCCH was of the “evangelizing” kind, introducing the possibilities opened up by computer-based tools to scholars mainly used to working with pen and paper. The continuity mentioned at the beginning of this essay is also accompanied by fundamental changes in the place of the computer in Humanities research and teaching. While computers are now involved in one way or another in all scholarly work being done in the Humanities, there is still a division between those who use computers mainly as a combination of writing tools and reference works and those who use IT actively as part of their research – whether for statistical tools, text editions, geographic information systems (GIS), or something else.

In June 2013 the seminar “What are Digital Humanities?”, organized by Annika Rockenberger at the University of Oslo,[33] included presenters from other Norwegian and Nordic universities as well as from further abroad. One of the results of this seminar was the establishment of a cross-disciplinary Digital Humanities Network at the University of Oslo, which also aims to link up with other national and international groups. A “Centrum för digital humaniora” (Center for Digital Humanities) was recently established at the University of Gothenburg, Sweden, and there are hopes that something similar may appear at one or more universities in Norway. And since there is a growing trend towards Digital Humanities centers and networks in other countries, and towards regional associated organizations being established under the umbrella of the European Association for Digital Humanities[34] (such as the German-language “Digital Humanities im deutschsprachigen Raum”[35]), we may also see a Nordic Digital Humanities network established in the not too distant future.

[1] Humanistiske Data, ISSN 0800-6792, 1973–91, University of Bergen. For the facsimile, see <> (accessed 15.10.2014).
[2] See <> (accessed 15.10.2014).
[3] The Future of the Humanities in the Digital Age – Abstracts: ISBN 82-994823-0-5, University of Bergen 1998.
[4] Koenraad de Smedt et al. (eds), Computing in Humanities Education – A European Perspective, Bergen 1999.
[5] One popular tool was RUBREG, see <> (accessed 15.10.2014).
[6] See <> (accessed 15.10.2014).
[7] See <> (accessed 15.10.2014).
[8] See <> (accessed 15.10.2014).
[9] At the University of Oxford, there are tools for converting data encoded in COCOA (with roots back to the 1960s) into modern TEI, see for instance <> (accessed 15.10.2014).
[10] See for instance <> (accessed 15.10.2014).
[11] See for instance <> (accessed 15.10.2014).
[12] See <> (accessed 15.10.2014).
[13] See <> (accessed 15.10.2014).
[14] See <> (accessed 15.10.2014).
[15] This statement is based on personal communications from Claus Huitfeldt, then leader of the WAB at the University of Bergen and Michael Sperberg-McQueen, co-editor of the XML 1.0 specifications in 1998.
[16] See <> (accessed 15.10.2014).
[17] See <> (accessed 15.10.2014).
[18] See <> (accessed 15.10.2014).
[19] See <> (accessed 15.10.2014).
[20] See <> (accessed 15.10.2014).
[21] See <> (accessed 15.10.2014).
[22] See <> (accessed 15.10.2014).
[23] See <> (accessed 15.10.2014).
[24] See <> (accessed 15.10.2014).
[25] See <> (accessed 15.10.2014).
[26] See <> (accessed 15.10.2014).
[27] See <> (accessed 15.10.2014).
[28] See <> (accessed 15.10.2014).
[29] See <> (accessed 15.10.2014).
[30] See <> (accessed 15.10.2014).
[31] See <> (accessed 15.10.2014) (requires username/password which can be registered automatically).
[32] See <> (accessed 15.10.2014).
[33] See <> (accessed 15.10.2014).
[34] See <> (accessed 15.10.2014).
[35] See <> (accessed 15.10.2014).

Some thoughts on Digital Humanities in Norway, in: H-Soz-Kult, 13.11.2014, <>.