The modern world is overrun with all kinds of competing propaganda and counterpropaganda and a vast variety of other symbolic activities, such as education, publishing, newscasting, and patriotic and religious observances. The problem of distinguishing between the effects of one's own propaganda and the effects of these other activities is often extremely difficult.
The ideal scientific method of measurement is the controlled experiment. Carefully selected samples of members of the intended audiences can be subjected to the propaganda while equivalent samples are not. Or the same message, clothed in different symbols—different mixes of sober argument and “casual” humour, different proportions of patriotic, ethnic, and religious rationalizations, different mixes of truth and the “noble lie,” different proportions of propaganda and coercion—can be tested on comparable samples. Also, different media can be tested to determine, for example, whether results are better when reactors read the message in a newspaper, observe it in a spot commercial on television, or hear it wrapped snugly in a sermon. Obviously the number of possible variables and permutations in symbolism, media use, subgrouping of the audience, and so forth is extremely great in any complicated or long-drawn-out campaign. Therefore, the costs for the research experts and the fieldwork that are needed for thorough experimental pretests are often very high. Such pretests, however, may save money in the end.
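As a rough illustration of the kind of comparison such a pretest implies, the sketch below (in Python, with invented respondent counts and a simple two-proportion z-test, none of which is prescribed by the text) splits an audience into an exposed and an unexposed sample and asks whether the difference in favourable responses is larger than chance alone would suggest:

```python
import random
from statistics import NormalDist

# Controlled-pretest sketch: one randomly drawn sample is exposed to the
# message, an equivalent sample is not, and the share of favourable
# responses is compared afterwards. All names and figures are invented.

def assign_samples(audience, seed=42):
    """Randomly split an audience list into exposed and control halves."""
    pool = list(audience)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

def two_proportion_z(fav_a, n_a, fav_b, n_b):
    """Two-sided z-test for a difference between two favourable-response rates."""
    p_a, p_b = fav_a / n_a, fav_b / n_b
    pooled = (fav_a + fav_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, z, p_value

audience = [f"respondent_{i}" for i in range(800)]
exposed, control = assign_samples(audience)

# Hypothetical post-exposure survey results for the two samples.
diff, z, p = two_proportion_z(fav_a=132, n_a=len(exposed),
                              fav_b=101, n_b=len(control))
print(f"difference in favourable rate: {diff:+.3f}, z = {z:.2f}, p = {p:.3f}")
```

In a real campaign each tested variable (symbol mix, medium, audience subgroup) would require its own comparison of this sort, which is why the fieldwork for thorough pretests becomes so costly.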
An alternative to controlled experimentation in the field is controlled experimentation in the laboratory. But it may be impossible to induce reactors who are truly representative of the intended audience to come to the laboratory at all. Moreover, in such an artificial environment their reactions may differ widely from the reactions that they would have to the same propaganda if reacting un-self-consciously in their customary environment. For these and many other obvious reasons, the validity of laboratory pretests of propaganda must be viewed with the greatest caution.
Whether in the field or the laboratory, the value of all controlled experiments is seriously limited by the problem of “sleeper effects.” These are long-delayed reactions that may not become visible until the propaganda has penetrated resistances and insinuated itself deep down into the reactor's mind—by which time the experiment may have been over for a long time. Another problem is that most people acutely dislike being guinea pigs and also dislike the word propaganda. If they find out that they are subjects of a propagandistic experiment, the entire research program, and possibly the entire campaign of propaganda of which it is a part, may backfire.
Another research device is the panel interview—repeated interviewing, over a considerable period of time, of small sets of individuals considered more or less representative of the intended audiences. The object is to obtain (if possible, without their knowing it) a great deal of information about their life-styles, belief systems, value systems, media habits, opinion changes, heroes, role models, reference groups, and so forth. The propagandist hopes to use this information in planning ways to influence a much larger audience. Panel interviewing, if kept up long enough, may help in discovering sleeper effects and other delayed reactions. The very process of being “panel interviewed,” however, produces an artificial environment that may induce defensiveness, suspicion, and even attempts to deceive the interviewer.
For many practical purposes, the best means of measuring—or perhaps one had better say estimating—the effects of propaganda is apt to be the method of extensive observation, guided of course by well-reasoned theory and inference. “Participant observers” can be stationed unobtrusively among the reactors. Voting statistics, market statistics, press reports, police reports, editorials, and the speeches or other activities of affected or potentially affected leaders can also give clues. Evidence on the size, composition, and behaviour of the intermediate audiences (such as elites) and the ultimate audiences (such as their followers) can be obtained from these various sources and from sample surveys. The statistics of readership or listenership for printed and telecommunications media may be available. If the media include public meetings, the number of people attending and the noise level and symbolic contents of cheering (and jeering) can be measured. Observers may also report their impressions of the moods of the audience and record comments overheard after the meeting. To some extent, symbols and leaders can be varied, and the different results compared.
Using methods known in recent years as content analysis, the propagandist can at least make reasonably dependable quantitative measurements of the symbolic contents of his own propaganda and of communications put out by others. He can count the numbers of column inches of printed space or seconds of radio or television time that were given to the propaganda. He can categorize and tabulate the symbols and themes in the propaganda. To estimate the implications of the propaganda for social policy, he can tabulate the relative numbers of expressed or implied demands for actions or attitude changes of various kinds. The 1970 edition of volume 1 of the Great Soviet Encyclopedia, for example, had no pictures of Stalin; in the previous edition, volume 1 had four pictures. Did this mean that a new father figure and role model was being created by the Soviet propagandists? Or did it indicate a return to the cult of older father figures such as Marx and Lenin? If so, what were the respective father figures' traits, considered psychoanalytically, and the political, economic, and military implications for Soviet policy?
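A minimal sketch of the counting step of content analysis follows, assuming a hypothetical theme lexicon and invented sample passages; real coding schemes are far more elaborate and rely on trained human coders rather than keyword matching:

```python
from collections import Counter

# Content-analysis sketch: tabulate how often each predefined theme appears
# in a set of texts. The theme lexicon and the passages are invented.

THEME_LEXICON = {
    "patriotic": {"nation", "homeland", "flag", "duty"},
    "economic":  {"jobs", "wages", "prices", "trade"},
    "religious": {"faith", "sacred", "divine"},
}

def tabulate_themes(texts):
    """Count keyword hits per theme across all supplied texts."""
    counts = Counter()
    for text in texts:
        words = set(text.lower().split())
        for theme, keywords in THEME_LEXICON.items():
            counts[theme] += len(words & keywords)
    return counts

sample_passages = [
    "Our nation calls every citizen to duty for the homeland",
    "New trade deals will protect jobs and hold prices steady",
]
for theme, n in tabulate_themes(sample_passages).most_common():
    print(f"{theme:10s} {n}")
```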
By quantifying their data about contents, propagandists can bring a high degree of precision into experiments using different propaganda contents aimed at the same results. They can also increase the accuracy of their research on the relative acceptability of information, advice, and opinion attributed to different sources. (Will given reactors be more impressed if they hear 50, 100, or 200 times that a given policy is endorsed—or denounced—by the president of the United States, the president of Russia, or the pope?)
Very elaborate means of coding and of statistical analysis have been developed by various content analysts. Some count symbols, some count headlines, some count themes (sentences, propositions), some tabulate the frequencies with which various categories of “events data” (newspaper accounts of actual happenings) appear in some or all of the leading newspapers (“prestige papers”) or television programs of the world. Some of these events data can be counted as supporting or reinforcing the propaganda, some as opposing or counteracting it. Whatever the methodology, content analysis in its more refined forms is an expensive process, demanding long and rigorous training of well-educated and extremely patient coders and analysts. And there remains the intricate problem of developing relevant measurements of the effects of different contents upon different reactors.
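For illustration only, a small tally of hypothetical coded events data is sketched below, recording for each invented "prestige paper" how many items a coder judged supporting, opposing, or neutral toward the campaign; the classification itself is the expensive, human part of the process:

```python
from collections import defaultdict

# Events-data tally sketch: each coded item records its source paper and the
# coder's judgment. The records below are invented.

coded_events = [
    {"paper": "Paper A", "coding": "supporting"},
    {"paper": "Paper A", "coding": "opposing"},
    {"paper": "Paper B", "coding": "neutral"},
    {"paper": "Paper B", "coding": "supporting"},
    {"paper": "Paper B", "coding": "supporting"},
]

def tally(events):
    """Build a paper-by-coding frequency table."""
    table = defaultdict(lambda: defaultdict(int))
    for event in events:
        table[event["paper"]][event["coding"]] += 1
    return table

for paper, row in tally(coded_events).items():
    total = sum(row.values())
    share = row.get("supporting", 0) / total
    print(f"{paper}: {dict(row)}  supporting share = {share:.2f}")
```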