Nomological validation

Adcock and Collier’s third and final form of validation is “nomological validation”: a measurement is valid if a reasonably well-established hypothesis is confirmed when our own measure is used. To assess this form of validity, we have examined a well-established hypothesis in which our systematized concept is expected to play a role. Many scholars agree that in contemporary Western Europe most populist parties are parties of the far right, and therefore exclusionary towards groups that they do not consider part of “their” people (Albertazzi and McDonnell 2008b; Taggart 2004; Mudde 2007; Betz and Johnson 2004; Rydgren 2005).

A minority of scholars considers exclusionism to be a defining element of populism. They see it as a third dimension of populism, next to people-centrism and anti-elitism. Albertazzi and McDonnell (2008b), for instance, argue that populism is an ideology that pits the virtuous people against a set of elites and “dangerous others” (see also Taguieff 1995: 34-35). Depending on the context, these “dangerous others” could be immigrants, the unemployed, or people of another religion or race. Most scholars, however, do not subscribe to this three-dimensional definition of populism. With them, we argue that although populism and exclusionism often go hand in hand, exclusionism is not a definitional characteristic of populism. Parties can be populist without being exclusionist.

To test the hypothesis that populism is related to exclusionism, we have measured the degree of exclusionism in election manifestos. Exclusionism refers mainly to immigrants in this context, since this group is the one most targeted by populists in Western Europe. In the classical content analysis it was measured by asking whether the writers of the manifesto express negative opinions regarding “others” (i.e. persons or groups who are perceived as not belonging to “the people” the authors identify themselves with) (see Appendix A). The words with which this variable has been measured in the automatic content analysis can be found in Appendix B. The results suggest that the more populist parties are also the more exclusionist ones. The classical content analysis yields a correlation of r = .67 (significant at p < .01); the Pearson correlation coefficient in the automatic content analysis is r = .85 (significant at p < .01). The hypothesis is confirmed by both methods, which means that the results of both methods are nomologically valid.
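The correlation itself is straightforward to reproduce once per-party populism and exclusionism scores are available. The following Python sketch illustrates the calculation; the score lists in it are hypothetical placeholders, not the data underlying the figures reported above.

```python
# Minimal sketch: correlating party-level populism and exclusionism scores.
# The score lists below are hypothetical placeholders, not the data reported in the text.
from scipy.stats import pearsonr

# One entry per party manifesto, e.g. the share of populist / exclusionist paragraphs.
populism = [0.02, 0.05, 0.11, 0.24, 0.31, 0.08]
exclusionism = [0.01, 0.03, 0.09, 0.18, 0.22, 0.02]

r, p_value = pearsonr(populism, exclusionism)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```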

There is, however, one important exception to the correlation between populism and exclusionism: the Dutch SP. Although this party was strongly populist in 1994, it was barely exclusionist in that year. This observation is confirmed by both methods. It indicates that although populism and exclusionism may go hand in hand in many circumstances, this is not necessarily the case. The “social populists” of the SP show that exclusionism should not be seen as one of the defining characteristics of populist parties.

We can make a distinction between three types of reliability: stability, inter-coder reliability and accuracy (Krippendorff 2004). Stability is the degree to which a measure does not change over time. It can be assessed when one coder codes the same text more than once. Krippendorff sees stability as the weakest form of reliability. Inter-coder reliability is the extent to which different coders code the same text in the same way. This can be assessed by looking at the agreements and disagreements between coders. Accuracy refers to the extent to which the measurement of a text corresponds to a standard or norm. Unfortunately, researchers usually do not possess such standards. As a result, accuracy measures are seldom used in content analysis. Most researchers therefore focus on inter-coder reliability when they assess the reliability of their analysis.

To prevent low inter-coder reliability in the classical content analysis, we extensively trained our seven coders (three from the United Kingdom and four from the Netherlands). Every coder attended three training sessions in which the codebook was explained and coding examples were discussed. Between the training sessions, the coders had to complete take-home exercises. After the training sessions, we assessed the inter-coder reliability. Coders had to complete two reliability tests. First, all coders analyzed a sample of paragraphs from British election manifestos, so that we could determine whether the cross-national inter-coder reliability was sufficient (the Dutch coders speak English too). We calculated the inter-coder reliability using Krippendorff’s alpha. The results for cross-national reliability are α = .74 for people-centrism, α = .76 for anti-elitism, and α = .88 for exclusionism. Second, all Dutch coders analyzed another sample of paragraphs from Dutch manifestos, so that we could assess the national inter-coder reliability in the Netherlands. For people-centrism α = .78, for anti-elitism α = .84, and for exclusionism α = .80. The national inter-coder reliability in the UK was assessed by analyzing the agreement among the three British coders on the sample of paragraphs from the British manifestos. The results are α = .73 for people-centrism, α = .66 for anti-elitism, and α = .78 for exclusionism. Although the alpha statistic for anti-elitism in the national reliability test in the United Kingdom is somewhat low, the statistics are in general satisfactory.[70]
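Krippendorff’s alpha can be computed with standard tooling. As a minimal sketch (not the procedure used in the study), the Python snippet below applies the third-party `krippendorff` package to a small, invented coder-by-paragraph matrix of binary codes.

```python
# Minimal sketch of an inter-coder reliability check with Krippendorff's alpha.
# Uses the third-party "krippendorff" package (pip install krippendorff);
# the coder-by-paragraph matrix below is invented for illustration only.
import numpy as np
import krippendorff

# Rows = coders, columns = coded paragraphs; 1 = populist, 0 = not populist,
# np.nan = paragraph not coded by that coder.
reliability_data = np.array([
    [1, 0, 0, 1, 1, 0, np.nan, 1],
    [1, 0, 1, 1, 1, 0, 0,      1],
    [1, 0, 0, 1, np.nan, 0, 0, 1],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```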

We can be brief about the reliability of the computer-based content analysis. Since a computer produces exactly the same results no matter how many times one runs the analysis, it could be argued that the reliability is 100 per cent. At the same time, there is often discussion about which texts should be used for the analysis. Since the results may differ depending on which sources are used, we should be careful about claiming that the computer-based content analysis is perfectly reliable.
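The determinism of the automatic approach is easy to see in a dictionary-based word count, the general technique on which computer-based analyses of this kind rely. The sketch below is illustrative only: the word list is a hypothetical stand-in for the actual dictionary in Appendix B.

```python
# Minimal sketch of a dictionary-based (automatic) content analysis.
# The word list below is a hypothetical stand-in for the actual dictionary in Appendix B.
import re

populism_dictionary = ["elite", "elites", "establishment", "the people"]

def populism_score(text: str) -> float:
    """Share of dictionary hits per word in the text; fully deterministic."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    hits = sum(len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
               for term in populism_dictionary)
    return hits / len(words) if words else 0.0

manifesto = "The political elite has betrayed the people time and again."
# Running this twice always yields the same score: no human interpretation is involved.
print(populism_score(manifesto))
```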

We can conclude that both methods are reliable. The classical analysis is, however, less reliable than the computer-based method. The reason for its weaker performance on reliability is the same as the cause of its better performance on validity: the wider space for contextual interpretation. Because coders have the freedom to interpret the context of the text, coder A may well code a text differently from coder B. As a result, the analysis is less consistent than the computer-based method, in which no human interpretation is involved.

