The IHR Blog

Author Archives: jonathanblaney

More project case studies available


We’re very pleased that all of the project case studies are now available to read online. We posted before about the first five case studies when we added them to our institutional repository. Now the other five have joined them.

Searching for Home in the Historic Web: An Ethnosemiotic Study of London-French Habitus as Displayed in Blogs by Saskia Huc-Hepher. Saskia examines blogs by French people living in London, how their attitudes to the city change over time, and how those changes are reflected in the text and imagery of their blogs.

Capture, commemoration and the citizen historian: digital shoebox archives relating to PoWs in the Second World War by Alison Kay. Alison is interested in the way that personal archives collected by citizen historians may be studied via web archives.

A History of UK Companies on the Web by Marta Musso. Marta describes how she looked for evidence of the way that UK companies took their first, tentative steps in establishing websites.

The Online Development of the Ministry of Defence (MoD) and Armed Forces by Harry Raffal. Harry studies changes in the MoD websites over time, both in terms of their emphasis and the degree to which they are centrally controlled and branded.

Looking for public archaeology in the web archives by Lorna Richardson. Lorna examines the way that the public thinks of archaeology, using web archives as her evidence base.

We’d like to thank our 10 bursary holders for their enthusiasm and commitment to the project. Their feedback was invaluable in informing the development of our web interface, and their case studies are wonderful examples of the subject range and methodological variety of research that can be carried out using web archives.


IHR workshop on web archiving


On 11 November the IHR held a workshop, ‘An Introduction to Web Archiving for Historians’, for which we welcomed back two old friends from the BUDDAH project as speakers.

The day opened with Jane Winters talking about why historians should be using web archives. You can see the slides of Jane’s talk here, including a couple courtesy of a fascinating presentation from Andy Jackson about the persistence of live web pages. This was followed by Helen Hockx-Yu, formerly of the British Library’s web archiving team but now at the Internet Archive. Helen described the Internet Archive’s mind-boggling scale and its ambitious plans for the future; Helen’s slides are here. Jane then returned to talk about the UK Government Web Archive and Parliament Web Archive (more slides here).

After having heard about various web archives, attendees were able to try the Shine interface for themselves. This is an interface to a huge archive – the UK’s web archive covering 1996 to 2013 – all now searchable as full text. Shine was one of the major outputs of the BUDDAH project and we were delighted to see how fast and responsive it now is, thanks to the continuing work of the web archiving team at the British Library.

Before lunch there was time for Marty Steer to lead a quick canter through the command line tool wget. Marty explained how flexible this tool is for downloading web pages or whole sites (and the importance of using the settings provided to avoid hammering sites with a blizzard of requests). You can even use wget to create complete WARC files. Marty’s presentation, with all of the commands used, can be read here.

After lunch Rowan Aust of Royal Holloway described her research on the BBC’s reaction to the Jimmy Savile scandal and how it has removed Savile from instances of its web and programme archives. Rowan’s earlier account of the research, written for the BUDDAH project, is on our institutional repository.

Then it was back to the command line, as Jonathan Blaney explained how easy it is to interrogate very large text files by typing a few lines of text. On Mac and Linux machines a fully-featured ‘shell’, bash, is provided by default; for this session using Windows Jonathan had installed ‘Git bash’, a free, lightweight version of bash (there are useful installation instructions here). The group looked at links to external websites in the Shine dataset, using a sample of a random 0.01% of the full link file; this still amounted to about 1.5 million lines (the full file, at 19GB, can be downloaded from the UKWA site). The main command used for this hands-on session was grep, a powerful yet simple search utility which is ideal for searching very large files or numbers of files.
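The streaming, pattern-matching approach that makes grep practical on multi-gigabyte files can be sketched in a few lines of Python. This is a toy illustration only: the function name and the sample line format are invented, and the real UKWA link file has its own layout.

```python
from collections import Counter
import re

def grep_count(lines, pattern):
    """grep-style streaming search: scan lines one at a time and count
    regex matches, so a multi-gigabyte link file never has to fit in memory."""
    regex = re.compile(pattern)
    hits = Counter()
    for line in lines:
        for match in regex.findall(line):
            hits[match] += 1
    return hits

# Toy stand-in for the link data: one "source -> target" pair per line.
sample = [
    "bbc.co.uk\ttwitter.com",
    "bbc.co.uk\tfacebook.com",
    "gov.uk\ttwitter.com",
]
top = grep_count(sample, r"twitter\.com")
print(top)  # Counter({'twitter.com': 2})
```

On the command line, the equivalent one-liner would be something like `grep -c 'twitter\.com' links.tsv` (with the real filename substituted), which is exactly the kind of command the session practised.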

The day ended with the group using a free online tool which allows the archiving of web pages through a simple and intuitive interface.

We’d like to thank everyone who came to the workshop: this was the first time we had run such an event on web archiving and their enthusiastic participation and constructive feedback have given us the confidence to run this course again in the future.


New IHR Digital project to be funded by AHRC


IHR Digital is very pleased to announce that we have been awarded funding by the Arts and Humanities Research Council for a new project called the Thesaurus of British and Irish History as SKOS (TOBIAS).

The IHR will publish as a web ontology the Bibliography of British and Irish History’s subject classification of 8,800 terms for British and Irish history. This will provide a comprehensive, standard resource for all British and Irish history projects wishing to expose their data and link it to other projects using the Resource Description Framework. The benefit of linked data is that it is possible to find data in that format which could not be found using conventional search. Web ontologies are linked together to form the framework of the ‘semantic web’, and the TOBIAS project aims to embed a rigorous vocabulary of British and Irish history into that framework.
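To give a flavour of what publishing a subject classification as SKOS involves, here is a minimal sketch that emits a single concept as RDF Turtle. Everything here is invented for illustration: the `skos_concept` helper, the base URI, and the example terms; the real TOBIAS vocabulary will define its own URIs and labels, and a full Turtle document would also need the `@prefix skos:` declaration omitted here.

```python
def skos_concept(base, concept_id, pref_label, broader_id=None):
    """Emit one SKOS concept as RDF Turtle (the @prefix skos: line
    a complete document requires is omitted for brevity)."""
    lines = [
        f"<{base}{concept_id}> a skos:Concept ;",
        f'    skos:prefLabel "{pref_label}"@en',
    ]
    if broader_id:
        # skos:broader links the concept to its parent term in the hierarchy,
        # which is what lets thesauri interlink on the semantic web.
        lines[-1] += " ;"
        lines.append(f"    skos:broader <{base}{broader_id}>")
    lines[-1] += " ."
    return "\n".join(lines)

turtle = skos_concept("http://example.org/tobias/", "poor-law",
                      "Poor Law", broader_id="social-policy")
print(turtle)
```

Once every term is expressed this way, any project using RDF can point at the same concept URIs, which is precisely the linking the post describes.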


Toronto summer interns’ work


Here Kaspar writes about the work of the project’s two summer interns:

Roman Polyanovsky and Tim Alberdingk Thijm, two Computer Science undergraduates who were working as summer interns on the Dilipad project, have created a video to showcase the project to a general audience. The video gives a good insight into how our digital parliamentary corpora are constructed (without getting into technical details) and shows some exciting preliminary research results.

You can watch the video by following the link below:

Roman’s work consisted of transforming the OCR-ed text of the Canadian proceedings from its raw form all the way into a richly annotated XML dataset. This meant overcoming various challenges, such as optimising complex regular expressions to extract the crucial entities that appear in the proceedings: speaker names, topic titles and so on. Because of the noise caused by OCR errors, the regular expressions had to be fine-tuned so that they would be neither too conservative – excluding all slight deviations due to OCR errors – nor too general, which would equally lead to information loss. Roman put a lot of energy into preserving the elaborate topic structure of the original proceedings, which changes over time and differs slightly from the UK Hansards. To accomplish these goals, he built a general and flexible regex transformer that not only accurately converts parliamentary text to XML, but can also be applied to any other type of (political) text with only minor changes.
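A toy illustration of one step such a transformer might perform – wrapping recognised speaker lines in XML – with the caveat that the tag names, the pattern, and the sample lines are all invented here and far simpler than the real Dilipad pipeline:

```python
import re

# Deliberately simple pattern: a title (period optional, to tolerate an
# OCR slip), a capitalised surname, then a colon and the speech.
SPEAKER = re.compile(r"^(Mr\.?\s+[A-Z][A-Za-z'-]+):\s*(.*)$")

def to_xml(lines):
    out = []
    for line in lines:
        m = SPEAKER.match(line)
        if m:  # a recognised speaker entity starts a new speech element
            out.append(f'<speech speaker="{m.group(1)}">{m.group(2)}</speech>')
        else:  # everything else stays ordinary proceedings text
            out.append(f"<p>{line}</p>")
    return "\n".join(out)

xml = to_xml(["Mr. Meighen: I move the second reading.",
              "Some hon. members: Hear, hear."])
print(xml)
```

The real transformer handles many more entity types (topic titles, interjections, constituencies) and many more OCR variants, but the principle – regular expressions turning flat text into structure – is the same.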

Tim focused both on enhancing the database and on prospective research into the discourse on migration. The database part of his internship comprised improving the biographical information on MPs in the UK, which concretely boiled down to adding and correcting party information and solving multiple disambiguation problems. Because members were originally identified and linked with string-matching techniques, those who shared the same name were often assigned the same IDs. Tim managed to reduce the number of ambiguous MPs – those without correct biographical information – to almost zero, and has thus significantly improved the overall quality of the Dilipad metadata. Besides this, he made sure that almost all of the UK MPs were properly linked to external databases such as DBpedia and Wikipedia. In his research Tim examined the ideological differences in debates on immigration in the UK Parliament, and applied Structural Topic Models to extract Labour and Conservative frames on this topic.


Visualizing Parliamentary Discourse with Word2Vec and Gephi


This is a post by Kaspar Beelen. Kaspar is a post-doctoral researcher with the Toronto part of the project team.


For historians, the idea of “automated” content analysis is still contested and treated with a justified dose of suspicion. How can one interpret texts, and make claims about their meaning, on the basis of just quantitative output? Text-mining techniques, such as supervised classification, are directly transferred from the exact sciences to the humanities, even though many of these methods were not intended to serve as an instrument of content analysis. They are nevertheless used to detect and analyze traces of – for example – ideology and gender in text. Despite the reliance on often advanced algorithms, many studies come to something of a dead end when forced to interpret their model in the form of wordlists or word clouds, both being nothing more than a collection of lexical units devoid of context. Although the text is processed rigorously, the substance of the interpretation frequently depends on a rather arbitrary reading of a small set of phrases, which, due to their lack of context, are highly ambiguous. Many studies traditionally show the top twenty most important features that characterize certain ideological or gender perspectives, to prove that their classifier or other instrument successfully registers what it is trained to do. But what about the next 100 elements? And, more importantly, what is the interrelationship between all these significant features? Humanities scholars are often equally – if not more – interested in the structure or the content of the model than in its predictive power. It is not my intention to reconcile close and distant reading in just one short blog post, but I will elaborate upon how recent advances in both natural language processing and network visualization have made it possible to represent data in a way which is more fine-grained and holistic at the same time.

Visualizing “Women’s Interest” in Postwar Westminster

Below I’ll demonstrate how tools such as Word2Vec – an unsupervised method for obtaining vector representations of words – in combination with dynamic graphs can shed more light on ongoing debates within Political History and Political Science, such as Women’s Substantive Representation (WSR). In very concise terms, the theory of WSR hypothesizes that increases in the number of women legislators ensure that women’s interests, priorities, and perspectives will be better represented. To what extent can this theory be empirically corroborated by looking at the discursive practices of female MPs?

To track the issues and problems women have focused on after the Second World War, we’ve extracted nouns and adjectives – words most indicative of topics – and calculated to what extent these words characterize (when controlling for ideology) the discourse of female MPs.
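The “characterize the discourse” step can be approximated with a very simple smoothed relative-frequency ranking. This is a crude stand-in for the actual feature-selection procedure (which also controls for ideology), and the function name and toy word lists are invented for illustration:

```python
from collections import Counter

def distinctive(words_a, words_b, k=1.0):
    """Rank words by a smoothed relative-frequency ratio: how much more
    often group A uses a word than group B. Add-k smoothing keeps words
    unseen in one group from producing divisions by zero."""
    ca, cb = Counter(words_a), Counter(words_b)
    na, nb = sum(ca.values()), sum(cb.values())
    score = {w: ((ca[w] + k) / (na + k)) / ((cb[w] + k) / (nb + k))
             for w in set(ca) | set(cb)}
    return sorted(score, key=score.get, reverse=True)

# Invented toy "corpora"; the real input is nouns and adjectives drawn
# from decades of Hansard speeches by female and male MPs.
female = "child mother nursery school child benefit mother".split()
male = "tariff trade budget tariff school".split()
ranked = distinctive(female, male)
print(ranked[:2])
```

Words used disproportionately by the first group float to the top of the ranking, which is the shape of the wordlist discussed below.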

Most “female” words for 1945-2015


This procedure generates a list of words that suggest, as expected, that women MPs speak more about “women” and that their social categorization somehow focuses on generational and family-related issues (“child”, “age”, “mother” etc.). However interesting, lists like these are not exactly fine-grained and don’t allow us to make holistic claims about the issues women MPs have traditionally prioritized after 1945. Just listing more features would put considerable strain on the researcher and reader alike, and would make the interpretation only more arbitrary. Moreover, it wouldn’t allow us to identify issues, their interrelationship, and their development over time. What we can do is cluster the most “female” features for each legislature between 1945 and 2015 based on the proximity of their vector representations as created with Word2Vec. These vector representations have proven to contain many interesting properties, the most famous observation being that when subtracting the vector(“man”) from the vector(“king”) and then adding the vector(“woman”), the closest match turns out to be the vector(“queen”).

Besides being successful in solving analogy tasks, Word2Vec turns out to be useful for many other tasks, such as clustering. Creating a vector space model based on all speeches of female MPs enables us to construct a nearest neighbor network of all words which are indicative of women’s parliamentary language. Each word w1 thus becomes a node and is connected to another word/node w2 if the latter appears in the set of n-closest vectors to word w1. The result is a network that at first sight might not seem very illuminating.


Figure 1: Graph in Stage 1
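The nearest-neighbour construction described above – every word a node, linked to the n words whose vectors lie closest to it – can be sketched with toy vectors. The 2-d vectors below are invented; real Word2Vec embeddings have hundreds of dimensions and are trained on the full corpus of speeches.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def knn_graph(vectors, n=1):
    """Each word becomes a node, connected to the n words whose vectors
    are most similar to its own."""
    edges = {}
    for w1, v1 in vectors.items():
        sims = sorted(((cosine(v1, v2), w2)
                       for w2, v2 in vectors.items() if w2 != w1),
                      reverse=True)
        edges[w1] = [w2 for _, w2 in sims[:n]]
    return edges

# Invented toy embeddings for three words.
toy = {"nurse": (0.9, 0.1), "midwife": (0.85, 0.2), "tariff": (0.1, 0.95)}
graph = knn_graph(toy, n=1)
print(graph["nurse"])  # ['midwife']
```

The adjacency lists this produces are exactly what gets exported to Gephi as an edge list for layout and community detection.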

Luckily we can use Gephi, an excellent visualization tool, to transform this unordered hairball into a neatly structured graph with identifiable clusters. Gephi not only allows one to visualize the network using different layout algorithms; it also comes with many other methods for analyzing its structure. After rearranging the graph using a linear-attraction linear-repulsion model (“Force Atlas”), we apply the “Modularity” algorithm to detect communities in the network and color each cluster separately. The result looks as follows:


Figure 2: Graph after running layout algorithm and colored by cluster

The motivation for constructing networks such as these is only partly aesthetic. Although it might not seem obvious at first glance, the graph provides a framework for studying the lexical choices of women legislators over time as well as across party. Firstly, we can scrutinize the clusters separately and identify the issues they capture. The figures below zoom in on the different communities the Modularity algorithm has detected.


Figure 3: Graph representing the “Education” cluster


Figure 4: Graph representing the “Fertility” cluster

Secondly, and even more interestingly for historians, Gephi supports the construction of dynamic graphs, which have the ability to visualize change over time. These lexical networks capture and visualize multiple attributes, be it the frequency, the weight assigned to words by the feature selection algorithm (to what extent a word is indicative of a certain perspective), or party (i.e. whether female Conservative MPs use a word more than female Labour MPs). As an example, the figure below shows a cluster of words relating to “poverty and inequality” for two different periods, with the node color indicating party (blue meaning Conservative, red standing for Labour and yellow indicating that female members of both parties use a word more than their male party colleagues).


Figure 5: the “Equality” cluster for 1945-1951 (first Attlee Cabinet)


Figure 6: the “Equality” cluster for 1987-1992 (last Thatcher Cabinet)

The subgraph suggests that in the early postwar years, during the Attlee premiership, the “poverty” theme was mainly a priority of Labour women. However, during Thatcher’s last ministry both Conservative and Labour women equally prioritized this topic when speaking of the “poorest”, the “vulnerable” and “poverty”. Besides the overlap, they also employed a distinctly different jargon. Labour women talked more about “inequality” and the “[poverty] trap”, while their Conservative female colleagues concentrated on the “needy” and “poorer”.

The options for research are virtually endless and largely depend on the hypothesis or question being investigated.


Although these lexical networks provide a valuable framework for exploratory research on the historically changing discourse of gender and party, they might not always be the most convenient way of presenting the results. Gephi allows the user to export all the data to a spreadsheet and analyze the network quantitatively in a more straightforward manner. Based on the network created in the preceding paragraphs, we can demarcate the periods during which certain clusters figured more prominently. The following table lists the lexical communities that appeared mainly during the fifties and sixties, suggesting that women’s interventions in the House of Commons concentrated on the practical aspects of everyday life and consumption affairs.

Table 2: Clusters for 1945-1965

 Cluster ID  Words
1 cooking, hot, wash, laundry, apparatus, lavatory, luxury, kitchen, catering, appliance, electric, room, cleaning, portable, refrigerator, cooker, fireplace, analgesia, bathroom, bath, washing
2 foodstuff, cabbage, tomato, vitamin, glut, production, cereal, import, fruit, exporter, pear, overseas, banana, lettuce, vegetable, potato, protein, strawberry, tinned, foreign, wholesaler, decontrol, importation, imported, importer, dried, apple, carrot, export
3 soap, jam, coffee, confectionery, powder, cocoa, coupon, glove, bean, cream, ice, tin, chocolate, sandwich, sweet, biscuit

For the decades after Blair’s landslide victory in 1997, a whole new range of topics has appeared, such as transport, crime, violence and fertility.

Table 3: Clusters for 1997-2014

 Cluster ID Words
1 passenger, emission, plane, freight, commuter, network, rail, operator, concessionary, fare, ticket, carriage, aviation, infrastructure, airline, franchising, airport, train, bus, booking, season, transport, railway, railtrack, franchise
2 perpetrator, malnutrition, harassment, abus, graffiti, suffering, sexual, victim, antisocial, fly, pain, assault, gross, violence, harm, domestic, tipper, crime, violent, behaviour, distress, litter, rape, abuse
3 genetic, experimentation, tissue, reproductive, stem, therapeutic, cell, gene, sperm, insemination, artificial, technique, fertilisation, gamete, implant, embryo, embryology



Project case studies now available


We are delighted that we can now make available five of the case studies written by researchers across the humanities and social sciences. More will be available via this blog soon.

At the beginning of the project we had a number of aspirations for what the case studies could achieve. Firstly, of course, we wanted to show the variety of research that could be undertaken across different disciplines with web archives. Secondly, we wanted the researchers to give us feedback on the interface to the archive that the project was developing (this they did at monthly meetings) and we are very grateful to them for attending and giving their views; this process improved the interface markedly. Thirdly we hoped that some of the researchers might become advocates for web archiving among their peers.

The last is already being realised. At a conference on web archiving in Aarhus in June, no fewer than four of our researchers gave papers. Given their enthusiasm, we are sure that they will also present their work at events in their own subject areas.

The first five case studies we are making available are:


Sources of Evidence for Automatic Indexing of Political Texts


Political texts on the Web, documenting laws and policies and the process leading to them, are of key importance to government, industry, and every individual citizen. Yet access to such texts is difficult due to the ever-increasing volume and complexity of the content, prompting the need to index or annotate them with a common controlled vocabulary or ontology.

We investigated the effectiveness of different sources of evidence – such as labeled training data, textual glosses of descriptor terms, and the thesaurus structure – for automatically indexing political texts.

The main findings are the following.

First, using a learning-to-rank approach integrating all features, we observe significantly better performance than previous systems.

Second, the analysis of feature weights reveals the relative importance of the various sources of evidence, also giving insight into the underlying classification problem. Interestingly, we found that the most important part of political documents is their title.

The research was done by University of Amsterdam’s researchers: Mostafa Dehghani, Hosein Azarbonyad, Maarten Marx, and Jaap Kamps; the results were presented as a poster at the 37th European Conference on Information Retrieval and won the best poster award. The original paper is available here.


M. Dehghani, H. Azarbonyad, M. Marx, and J. Kamps. Sources of evidence for automatic indexing of political texts. In A. Hanbury, G. Kazai, A. Rauber, and N. Fuhr, editors, Advances in Information Retrieval, volume 9022 of Lecture Notes in Computer Science, pages 568–573. Springer International Publishing, 2015. ISBN 978-3-319-16353-6. doi: 10.1007/978-3-319-16354-3_63.


The Historical Aspects of Dilipad: Challenges and Opportunities


This post originally appeared on the Digging into Linked Parliamentary Data project blog, and is a guest post by one of the historians working on the project, Luke Blaxill.

The Dilipad project is on one hand exciting because it will allow us to investigate ambitious research questions that our team of historians, social and political scientists, and computational linguists couldn’t address otherwise. But it’s also exciting precisely because it is such an interdisciplinary undertaking, which has the capacity to inspire methodological innovation. For me as a historian, it offers a unique opportunity not just to investigate new scholarly questions, but also to analyse historical texts in a new way.

We must remember that, in History, familiarity with corpus-driven content analysis and semantic approaches is minimal. Almost all historians of language use purely qualitative approaches (i.e. manual reading) and are unfamiliar even with basic word-counting and concordance techniques. Indeed, the very idea of ‘distant reading’ with computers, and of categorising ephemeral and context-sensitive political vocabulary and phrases into analytical groups, is massively controversial even for a single specific historical moment, let alone diachronically or transnationally over decades or even generations. The reasons for this situation in History are complex, but can reasonably be summarised as stemming from two major scholarly trends which have emerged in the last four decades. The first is the wide-scale abandonment of quantitative History after its perceived failures in the 1970s, and the migration of economic history away from the humanities. The second is the influence of post-structuralism from the mid-1980s, which encouraged historians of language to focus on close readings, and to shift from the macro to the micro, and from the top-down to the bottom-up. Political historians’ ambitions became centred on reconstructions of localised culture rather than ontologies, cliometrics, model-making, and broad theories. Unsurprisingly, computerised quantitative text analysis found few, if any, champions in this environment.

In the last five years, the release of a plethora of machine-readable historical texts (among them Hansard) online, as well as the popularity of Google Ngram, have reopened the debate on how and how far text analysis techniques developed in linguistics and the social and political sciences can benefit historical research. The Dilipad project is thus a potentially timely intervention, and presents a genuine opportunity to push the methodological envelope in History.

We aim to publish outputs which will appeal to a mainstream audience of historians who will have little familiarity with our methodologies, rather than to prioritise a narrower digital humanities audience. We will aim to make telling interventions in existing historical debates which could not be made using traditional research methods. With this in mind, we are pursuing a number of exciting topics using our roughly two centuries-worth of Parliamentary data, including the language of gender, imperialism, and democracy. While future blog posts will expand upon all three areas in more detail, I offer a few thoughts below on the first.

The Parliamentary language of gender is a self-evidently interesting line of enquiry during a historic period in which the role of women in the political process in Great Britain, Canada, and the Netherlands was entirely transformed. There has been considerable recent historical interest in the impact of women on the language of politics, and in female rhetorical culture. The Dilipad project will examine differences in vocabulary between male and female speakers, such as in the genre of topics raised, and also discursive elements: hedging, modality, the use of personal pronouns and other discourse markers – especially those which convey assertiveness and emotion. Next to purely textual features, we will analyse how the position of women in parliament changed over time and between countries (the time they spoke, how frequently they were interrupted, the impact of their discourse on the rest of the debate, etc.).

A second area of great interest will be how women were presented and described in debate – both by men and by other women. This line of enquiry might present an opportunity to utilise sentiment analysis (which in itself would be methodologically significant) which might shed light on positive or negative attitudes towards women in the respective political cultures of our three countries. We will analyze tone, and investigate what vocabulary and lexical formations tended to be most associated with women. In addition, we can also investigate whether the portrayal of women varied across political parties.

More broadly, this historical analysis could help shed light on the broader impact of women in Parliamentary rhetorical culture. Was there a discernible ‘feminized language of politics’, and if so, where did it appear, and when? Similarly, was there any difference in Parliamentary behaviour between the sexes, with women contributing disproportionately more to debates on certain topics, and less to others? Finally, can we associate the introduction of new Parliamentary topics or forms of argument to the appearance of women speakers?

Insights in these areas – made possible only by linked ‘big data’ textual analysis – will undoubtedly be of great interest to historians, and will (we hope) demonstrate the practical utility of text mining and semantic methodologies in this field.


Wliat’s in a n^me? Post-correction of randomly misrecognized names in OCR data


This post originally appeared on the Digging into Linked Parliamentary Data project blog, and is a guest post by team member Kaspar Beelen.


Notwithstanding the recent optimization of Optical Character Recognition (OCR) techniques, the conversion from image to machine-readable text remains, more often than not, a problematic endeavor. The results are rarely perfect. The reasons for the defects are multiple and range from errors in the original prints to more systemic issues such as the quality of the scan, the selected font or typographic variation within the same document. When we converted the scans of the historical Canadian parliamentary proceedings, the latter cause in particular turned out to be problematic. Typographically speaking, the parliamentary proceedings are richly adorned with transitions between different font types and styles. These switches are not simply due to the aesthetic preferences of the editors, but are intended to facilitate reading by indicating the structure of the text. Structural elements of the proceedings, such as topic titles, the names of the MPs taking the floor, audience reactions and other crucial items, are distinguished from common speech by the use of bold or cursive type, small capitals or even a combination.

Moreover, if the scans are not optimized for OCR conversion, the quality of the data decreases dramatically as a result of typographic variation. In the case of the Belgian parliamentary proceedings, a huge effort was undertaken to make the historical proceedings publicly available in PDF format. The scans were optimized for readability, but seemingly not for OCR processing, and unsurprisingly the conversion yielded a flawed and unreliable output. Although one might complain about this, it is at the same time highly unlikely that, considering the costs of scanning more than 100,000 pages, the process will be redone in the near future, so we have no option but to work with the data that is available.

For the aforementioned reasons, names printed in bold (Belgium) or small capitals (Canada) ended up misrecognized in an almost random manner, i.e. there was no logic in the way the software converted the name. Although this showcases the inventiveness of the OCR system, it makes linking names to an external database almost impossible. Below you see a small selection of the various ways in which ABBYY, the software package we are currently working with, mangled the name of the Belgian progressive liberal “Houzeau de Lehaie”:

Table 1: Different outputs for “Houzeau de Lehaie”

Houzeau de Lehnie. Ilonzenu dc Lehnlc. lionceau de Lehale.
Ilonseau de Lehaie. Ilonzenu 4e Lehaie. HouKemi de Lehnlc.
lionceau de Lehaie. Honaeaa 4e Lehaie. Hoaieau de Lehnle.
Ilonzenn de Lehaie. Heaieaa ée Lehaie. Homean de Lehaie.
Heazeaa «le Lehaie. Houzcait de Lekale. Houteau de Lehaie.
Hoiizcan de Lchnle. Henxean dc Lehaie. Houxcau de Lehaie.
Hensean die Lehaie. IleuzeAit «Je Lehnie. Houzeau de Jlehuie.
Ileaieaa «Je Lehaie. Honzean dc Lehaie Houzeau de Lehaic.
Hoiizcnu de Lehaie. Honzeau de Lehaie. Ilouzeati de Lehaie.
Houxean de Lehaie. Hanseau de Lehaie. Etc.

Although the quality of the scanned Canadian Hansards is significantly better, the same phenomenon occurs.

Table 2: Sample of errors spotted in the conversion of the Canadian Hansards (1919)


In many other cases even an expert would have a hard time figuring out to whom the name should refer.

Table 3: Misrecognition of names

I* nréeldcn*.

These observations are rather troubling, especially with respect to the construction of linked corpora: even if, let’s say, 99% of the text is correctly converted, the other 1% will contain many of the most crucial entities needed for marking up the structure and linking the proceedings to other sources of information. To correct this tiny but highly important 1%, I will focus in this blog post on how to automatically normalize speaker entities: those parts of the proceedings that indicate who is taking the floor. In order to retrieve contextual information about the MPs, such as party and constituency, we have to link the proceedings to our biographical databases. Linking will only be possible if the speaker entities in the proceedings match those in our external corpus.

On most occasions speaker entities include a title and a name, followed by optional elements indicating the function and/or the constituency of the orator. The colon forms the border between the speaker entity and the actual speech. In a more formal notation, a speaker entity consists of the following pattern:

Mr. {Initials} Name{, Function} {(Constituency)}: Speech.

Using regular expressions we can easily extract these entities. The result of this extraction is summarized by the figures below, which show the frequency with which the different speaker entities occur.
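The formal pattern above translates fairly directly into a regular expression. The sketch below is a simplified, illustrative pattern (not the project's actual one, and with invented sample lines); it also shows how a single OCR slip mints a brand-new "unique" speaker entity, which is exactly what inflates the distributions in the figures.

```python
import re
from collections import Counter

# "Mr. {Initials} Name{, Function} {(Constituency)}: Speech." as a
# simplified regular expression.
ENTITY = re.compile(
    r"^(?P<entity>(?:Mr|Mrs|M)\.\s+"   # title
    r"(?:[A-Z]\.\s*)*"                 # optional initials
    r"[A-Z][\w'-]+"                    # name
    r"(?:,\s*[^(:]+?)?"                # optional function
    r"\s*(?:\([^)]*\))?)"              # optional constituency
    r"\s*:"                            # colon before the speech
)

lines = [
    "Mr. ROWELL (Durham): The committee has reported.",
    "Mr. ROWELL: I beg to move.",
    "Mr. R0WELL: ...",  # OCR slip (zero for O) creates a spurious entity
]
counts = Counter(m.group("entity") for m in map(ENTITY.match, lines) if m)
print(len(counts))  # 3 distinct entities for what is really one speaker
```

Note that even without OCR errors the first two lines already yield two different entity strings for the same member, which is why the raw distributions contain far more entities than there are MPs.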

Figure 1: Distribution of extracted speaker entities (Canada, 1919)

Figure 2: Distribution of extracted speaker entities (Belgium, 1893)
The figures lay bare the scope of the problem caused by these random OCR errors in more detail. Ideally there shouldn’t be more speaker entities than there are MPs in the House, which is clearly not the case. As you can see for the Belgian proceedings from the year 1893, the set of items occurring only once or twice contains around 3,000 unique elements. The output for the Canadian Hansards from 1919 looks slightly better, but there are still around 1,000 almost unique items. Also, as is clear from the plots, the distribution of the speakers is strongly right-skewed, due to the large number of unique and wrongly recognized names in the original scans. We will try to reduce this right-skewedness by replacing the almost unique elements with more common items.


In a first step we set out to replace these names with similar items that occur more frequently. Replacement happens in two consecutive rounds: first by searching in the local context of the sitting, and second by looking for a likely candidate in the set of items extracted from all the sittings of a particular year. To measure whether two names resemble each other, we calculate cosine similarity, based on n-grams of characters, with n running from one to four.
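A minimal sketch of one such replacement round might look as follows; the 0.7 similarity threshold and the helper names are illustrative assumptions, not the project's actual parameters.

```python
from collections import Counter
from math import sqrt

def ngram_profile(name, n_max=4):
    """Counts of all character n-grams of a name, n = 1..4."""
    name = name.lower()
    grams = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(name) - n + 1):
            grams[name[i:i + n]] += 1
    return grams

def cosine(a, b):
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(a[g] * b[g] for g in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_replacement(rare, frequent_names, threshold=0.7):
    """Map a rare (probably misrecognized) name to the most similar
    frequent one, if the similarity clears the threshold."""
    rare_profile = ngram_profile(rare)
    score, candidate = max(
        (cosine(rare_profile, ngram_profile(f)), f) for f in frequent_names
    )
    return candidate if score >= threshold else None

print(best_replacement("Houxcau de Lehaie", ["Houzeau de Lehaie", "Helleputte"]))
# → Houzeau de Lehaie
```

Running this first over the names in the same sitting, and then over the whole year, generates the replacement rules discussed below.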

More formally, the correction starts with the following procedure:

As shown in table 4, running this loop yields many replacement rules. Not all of them are correct, so we need to manually filter out and discard any illegitimate rules that this procedure has generated.

Table 4: Selection of rules generated by the above procedure

Legitimate rules Illegitimate rules

Just applying these corrected replacement rules would already increase the quality of the text material considerably. But, as stated before, similarity won’t suffice when the quality is awful, as is the case for the examples shown in table 2. We need to go beyond similarity, but how?

The solution I propose is to use the replacement rules to train a classifier and subsequently apply the classifier to instances that couldn’t be assigned a correction during the previous steps. OCR correction thus becomes a multiclass classification task, in which each generated rule is used as a training instance. The right-hand side of the rule represents the class, or target variable. The left-hand side is converted to input variables, or features. After training, the classifier will predict a correction, given a misrecognized name as input. For our experiment we used Multinomial Naïve Bayes, trained with n-grams of characters as features, with n again ranging from 1 to 4. This worked surprisingly well: 90% of the rules it created were correct. Only around 10% of the rules generated by the classifier were either wrong or didn’t allow us to make a decision. Table 5 shows a small fragment of the rules produced by the classifier.
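To make the idea concrete, here is a minimal, self-contained Multinomial Naïve Bayes over character n-grams (n = 1..4), trained on (misrecognized form, correction) rules like those in the tables. This is a stand-in sketch, not the project's actual implementation; the class name, smoothing details and training rules are my own.

```python
from collections import Counter, defaultdict
from math import log

def ngrams(token, n_max=4):
    token = token.lower()
    return [token[i:i + n] for n in range(1, n_max + 1)
            for i in range(len(token) - n + 1)]

class TinyMultinomialNB:
    """Multinomial Naive Bayes over character n-gram counts, with Laplace smoothing."""
    def fit(self, inputs, targets):
        self.class_counts = Counter(targets)
        self.feature_counts = defaultdict(Counter)
        self.vocab = set()
        for text, cls in zip(inputs, targets):
            grams = ngrams(text)
            self.feature_counts[cls].update(grams)
            self.vocab.update(grams)
        return self

    def predict(self, text):
        grams = ngrams(text)
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for cls, prior in self.class_counts.items():
            counts = self.feature_counts[cls]
            denom = sum(counts.values()) + len(self.vocab)
            # log prior + summed Laplace-smoothed log likelihoods
            score = log(prior / total)
            score += sum(log((counts[g] + 1) / denom) for g in grams)
            if score > best_score:
                best, best_score = cls, score
        return best

# Each replacement rule is one training instance: garbled form -> correct class.
rules = [
    ("aandcrklndcrc", "Vanderkindere"),
    ("IYanoerklnaere", "Vanderkindere"),
    ("Ilellcpuitc", "Helleputte"),
    ("Hellcputte", "Helleputte"),
]
clf = TinyMultinomialNB().fit(*zip(*rules))
print(clf.predict("Vanderklndere"))  # → Vanderkindere
```

Even a new garbling unseen in training is mapped to the right class, because a few stable n-grams carry the signal.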

Table 5: Sample of classifier output given input name

Input name Classifier output
,%nsaaeh-l»al*saai. Anspach-Puissant.
aandcrklndcrc. Vanderkindere.
fiillleaiix. Gillieaux.
IYanoerklnaere. Vanderkindere.
I* nréeldcn*. le président.
Ilellcpuitc. Helleputte.
Thlcapaat. Thienpont.


As you can see in table 5, the predicted corrections aren’t necessarily very similar to the input name. If just a few elements are stable, the classifier can pick up the signal even when there is a lot of noise. Because OCR software mostly recognizes at least a handful of characters consistently, this method performs well.

To summarize: what are the strong points of this system? First of all, it is fairly simple, reasonably time-efficient and works even when the quality of the original data is very bad. Manual filtering can be done quickly: for each year of data, it takes an hour or two to correct the rules generated by each of the two processes and replace the names. Secondly, once a classifier is trained, it can also predict corrections for the other years of the same parliamentary session. Lastly, as mentioned before, the classifier can correctly predict replacements on the basis of just a few shared characters.

Some weak points need to be addressed as well. The system still needs supervision. Nonetheless, this is worth the effort, because it can enhance the quality of the data significantly, especially with respect to linking the speeches at a later stage. In some cases, however, it can be impossible to assess whether a replacement rule should be kept or not. Another crucial problem is that the manual supervision needs to be done by experts who are familiar both with the historical period of the text and with the OCR errors. That is, the expert has to know which names are legitimate and also has to be proficient in reading OCR errors.

At the moment, we are trying to improve and expand the method. So far, the model uses only the frequency of n-grams, and not their location in a token. By taking location into account, we expect that we could improve the results, but that would also increase dimensionality. Besides adding new features, we should also experiment with other algorithms, such as support-vector machines, which perform better in a high-dimensional space. We will also test whether we can expand the method to correct other structural elements of the parliamentary proceedings, such as topical titles.


Accelerating and validating ex post facto hypothesis formulation


This is a guest post by Evelijn Martinius, highlighting findings from her internship on the project in January 2015:

Researchers often set out to test their hypothesis, but the question I would like to pose in this post is: how do they come to frame their hypothesis? Generally, researchers test which factors seem to be associated with certain conditions. The first problem with this type of “ex post facto” hypothesis formulation is the level of generalization: as these hypotheses are usually formulated with a specific event in mind, it might be difficult to generalize the findings. Secondly, observing the behaviour of a pre-established dependent variable is more likely to lead you to correlations between variables than to causal relations. Most importantly, it is also argued that the non-randomized sample that is selected can threaten the research’s validity.

The character of the Dilipad database allows us to overcome some of these disadvantages quite easily. The formulation of the hypothesis benefits from the homogeneous data, which still holds a lot of variance due to the large sample size. A simple method, like counting keywords, can give a quick and easy insight into the data and therefore stimulate a faster and easier hypothesis formulation.

Let’s take an example, using a simple method like counting keywords in the Dilipad database. It is often assumed that after the depillarization in the Netherlands politicians became more prone to ‘personalization’ (Kriesi, 2012; McAllister, 2009). Personalization differs from individualization because the public function remains dominant over the person’s image. Personalization can highlight idiosyncrasies in wording during debates because the focus is on the person rather than the party (Bennett, 2012). Since the first cabinet of Drees was the first post-war cabinet during the pillarization, we can expect that its politicians referred to their followers more than politicians in the first cabinet of Kok. We can test this expectation, for instance, by counting the usage of ‘I’ and ‘we’ for these two parliamentary terms.
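A sketch of such a keyword count might look as follows. In practice one would run it over the full proceedings retrieved via the PoliticalMashup search engine rather than raw strings; the Dutch sample sentence and the per-1,000-token normalization are my own assumptions.

```python
import re
from collections import Counter

def pronoun_rates(speeches, pronouns=("ik", "wij", "we")):
    """Relative frequency of each pronoun per 1,000 tokens.
    Dutch proceedings use 'ik' / 'wij' for 'I' / 'we'."""
    tokens = [t for s in speeches for t in re.findall(r"[a-zà-ÿ']+", s.lower())]
    counts = Counter(t for t in tokens if t in pronouns)
    total = len(tokens)
    return {p: round(1000 * counts[p] / total, 2) for p in pronouns}

# Hypothetical speech fragment; real input would be a parliamentary term's speeches.
sample_1951 = ["Ik meen dat wij hier voorzichtig moeten zijn."]
print(pronoun_rates(sample_1951))  # → {'ik': 125.0, 'wij': 125.0, 'we': 0.0}
```

Normalizing per 1,000 tokens makes the 1951 and 1995 terms comparable despite their different sizes.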

Counting ‘I’ and ‘we’ in 1951 and 1995, the first results seem to indicate little change in the usage of the word “I”. Current political theory would have a hard time explaining this, which might make it an interesting topic to examine in more depth.

While the Dilipad database allows us to speed up the process of hypothesis formulation, there is another promising quality to it. The focus on statistically significant outcomes might lead us to focus on events, as it is hard to explain the variance that we see over time theoretically or statistically. The example of the word count with ‘I’ and ‘we’ shows this difficulty immediately: we cannot explain the variance with current political theory, but perhaps from a sociolinguistic perspective there is an explanation for it. However, larger databases like Dilipad allow us to extend our timeline and see how the variance developed between 1951 and 1995, perhaps eventually discovering new insights from that pattern. We could also count more keywords and see what this does to the variance. Working with the Dilipad data and using the search engine of PoliticalMashup makes hypothesis formulation a lot faster and easier, which leaves us with more time to focus on examining new topics.


Bennett, W.L. (2012). ‘The personalization of politics: political identity, social media and changing patterns of participation.’ The ANNALS of the American Academy of Political and Social Science, 644(1), 20-39.

Kriesi, H. (2012). ‘Personalization of national election campaigns.’ Party Politics, 18(6), 825-844.

McAllister, I. (2009). ‘The personalization of politics.’ In R.J. Dalton and H.-D. Klingemann (ed.), The Oxford Handbook of Political Behavior, (pp. 571-588). Oxford: Oxford University Press.
