There are two kinds of discussion required before I can present the evidence for the propositions. One concerns what evidence is relevant and what quantitative form it should take. The second involves how the evidence was sifted for the propositions.
First, the propositions involve different ranges of conflict behavior. Some refer to conflict behavior in general, such as negative accusations, sanctions, border clashes, and war. This can be seen in Figure 16.1 of Chapter 16, where different causes and conditions are shown overlapping or involving the different types of conflict behavior. Only for those propositions about conflict behavior in general, then, does evidence bearing on any kind of conflict behavior bear on the proposition. Other propositions concern only a specific conflict behavior, such as war. For these, evidence concerning nonviolent conflict behavior or low-level violence, like border clashes, would be irrelevant.
A second problem is that the evidence will be based on different conflict samples, of which four kinds must be discriminated, as shown in Table 16C.1.
Not every sample will be relevant to each proposition; and if relevant, the nature of the sample may limit the kind of evidence that can be used. For example, for reasons discussed regarding the Disrupted Expectations Proposition 16.1 in Appendix 16B, I expect that the correlation would be positive between disrupted expectations and conflict behavior for the mixed sample, while near zero between this cause and any type of conflict behavior for the other samples (on interpreting such correlations, see Understanding Correlation). These expectations are shown in Table 16C.1.
The third quantitative problem concerns the conflict variable. Various analyses measure conflict behavior in different ways. In general, we may classify these measurements as shown in Table 16C.2. Some measurements of some conflict variables may be relevant to a proposition, but not all need be. These three dimensions along which to weigh the evidence--proposition, sample, conflict variable--are cross-classified in Table 16C.1.
The fourth problem in evaluating the evidence is the diverse techniques of analysis employed across the studies surveyed. Correlation techniques (regression, factor analysis, correlation coefficients) are the most common, and some other techniques can be easily interpreted correlationally (chi-square, t-test). Therefore, each cell in the table indicates whether the actual or underlying correlation should be plus (C+), near zero (P or A), or minus (C-). If the correlation can be anything (that is, the sample and conflict variables are irrelevant for the proposition), an R is inserted. To facilitate this assessment, each proposition in the Table of Propositions 16B.1 is stated empirically, usually in correlational terms. Each proposition is discussed theoretically and empirically in Appendix 16B, and the relevant evidence is more fully considered there.
Figure 16C.1 may help in understanding why certain correlations are predicted for each proposition. To predict the correlation between two variables, one must consider the measurement of each and the nature of the cases that will lie above and below each variable's mean. Correlations, after all, simply reflect whether the cases for two variables are similarly above and below the mean (yielding a C+), or whether when one is above, the other is usually below (C-).
Figure 16C.1 shows the cases that will be above and below the mean for each conflict variable, and the numerical scale involved. The mean is placed arbitrarily on each scale, and is only meant to show that for many of the conflict variables, some of the cases that manifest high conflict will nonetheless be below the mean on the specific conflict variable.
As an example of the problem here, consider correlating disrupted expectations (Proposition 16.1) with the frequency of violence. Let disrupted expectations be measured as a dichotomy (0 = absent; 1 = present). Now, the proposition implies that disrupted expectations should always be present, whatever the conflict behavior. All cases above the mean on violence therefore should have disrupted expectations. But all those cases below the mean with nonviolent conflict behavior, and those with lower than average violence, also will have disrupted expectations. Only those cases with no conflict behavior will have no disrupted expectations. This means that if the proposition is correct, the correlation between disrupted expectations and violence intensity will vary from near zero to moderate positive (C+), depending on how many cases with no conflict behavior are included in the analysis.
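This dependence on sample composition can be sketched with a small, purely illustrative computation. The case values below are invented, not drawn from any of the tabulated studies; the point is only that the same dichotomy yields a weaker or stronger positive correlation as the share of no-conflict cases changes:

```python
import statistics

def pearson(x, y):
    """Plain Pearson product-moment correlation."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def correlation_for_sample(n_no_conflict, violence_levels):
    """Correlate a disrupted-expectations dichotomy with violence intensity.

    Per the proposition: every case with any conflict behavior is coded 1
    (disrupted expectations present), no-conflict cases are coded 0, and
    no-conflict cases score 0 on violence.
    """
    disrupted = [0] * n_no_conflict + [1] * len(violence_levels)
    violence = [0] * n_no_conflict + list(violence_levels)
    return pearson(disrupted, violence)

# hypothetical conflict cases: some nonviolent (0), some up to intense war
conflict_cases = [0, 0, 1, 1, 2, 3, 5, 8]
r_few = correlation_for_sample(2, conflict_cases)    # few no-conflict cases
r_many = correlation_for_sample(40, conflict_cases)  # many no-conflict cases
```

With few no-conflict cases the dichotomy is nearly constant and the correlation stays modest; adding no-conflict cases strengthens it, which is why the predicted value in Table 16C.1 is given as a range rather than a point.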
The result of similarly predicting the correlation for each proposition for each sample for each conflict variable with regard to Figure 16C.1 is shown in the cells of Table 16C.1.
Some analyses may in part be incorrectly or incompetently done. These I ignore (e.g., Haas, 1974).
Some analyses may have conclusions contrary to their tabulated results and data. I ignore the conclusions and accept the analyses. In some cases, I have redone their analysis (e.g., Naroll, Bullough, and Naroll, 1974).
Some analyses may give more weight to results than I do. For example, from a correlation of .32 an analyst may conclude that there is a positive relationship between borders and conflict. Within the context of the analysis, however, I may conclude that this correlation, for practical purposes, is near zero and therefore not a C+ as I define it. I would like to set a rigid threshold for what correlation I will accept as a positive or negative result, but this is contextual. It depends on the sample, variables, degrees of freedom, techniques, and that "feeling" a researcher develops for important relationships. For me this is often a correlation near plus or minus .50.
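The contextual cutoff can be illustrated with a toy classifier. The C+ and C- codes and the .50 default follow the text; the function itself, and the idea of a single fixed threshold, are hypothetical simplifications of what the text describes as a contextual judgment:

```python
def classify_correlation(r, threshold=0.50):
    """Classify a correlation as C+ (positive), C- (negative), or near
    zero, using a single cutoff (a simplification; the text treats the
    cutoff as contextual, depending on sample, variables, and technique).
    """
    if r >= threshold:
        return "C+"
    if r <= -threshold:
        return "C-"
    return "near zero"

# the .32 correlation from the text falls below the ~.50 cutoff
verdict = classify_correlation(0.32)  # treated as near zero here
```

In practice the text applies no such rigid rule; degrees of freedom and research design move the cutoff case by case.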
Some analyses do not supply sufficient information to assess their correlations (e.g., Phillips and Hainline, n.d.). Sometimes I will accept these correlations, if the work seems otherwise competently done and the correlations are not crucial. However, especially important correlations are ignored if I cannot confidently reconstruct the analysis.
Some analyses operationalize concepts in a way quite different from their meaning here, such as operationalizing system polarity in terms of trade. Then I will translate the results in terms of their meaning here (polarity means for me centralization of command into opposing camps). In the case of polarity measured by trade, then, I would relate the results to the Cooperation-Conflict Proposition 18.2 of Chapter 18 rather than to the two Polarity Propositions, 16.20 and 16.23, of Chapter 16.
With all the above in mind, Table 16C.3 presents the evidence on Propositions 16.1 to 16.33. It is organized by level, and by study. The parameters of most of the published studies are given in Appendix III.
In evaluating this evidence, it is important to understand how it was collected.
First, since 1958 I have been putting on index cards the results of published systematic analyses relevant to my interests in conflict.
Second, after writing Chapters 2-13 of this Vol. 4: War, Power, Peace, but before writing these empirical chapters and stating the propositions, I surveyed all the empirical analyses I could find (see the discussion in Appendix III), including many of those I had previously indexed, and recorded major relevant correlations and results on index cards. "Relevant" means bearing on international conflict behavior in some way. Of course, I could not record all results (such as a large correlation matrix), and no doubt some relevant to the subsequently formulated propositions fell through the screen.
Third, after the survey, which took about a year, I stated the propositions given in Chapters 15-18 of this Vol. 4: War, Power, Peace. Now, the propositions are meant to make concrete my theoretical analyses of conflict here and in Vol. 1: The Dynamic Psychological Field, Vol. 2: The Conflict Helix, and Vol. 3: Conflict In Perspective. They are not meant to summarize, to reflect, or to consolidate the studies surveyed. Nor were any propositions taken from any other studies.
However, because the propositions were formulated after the survey, it is possible, and in some cases likely, that in stating them I was unconsciously responsive to the accumulated findings of others, not to mention my own past empirical analyses. Therefore, Table 16C.3 should be considered more evidence for, than independent tests of, the propositions.
Fourth, after the propositions were stated, I then organized all the indexed evidence by proposition and level as shown in Table 16C.3. Because a study may have undertaken more than one level of analysis, it may appear more than once in the table.
Fifth, if my indexed evidence were ambiguous or I suspected that the study had much more evidence on the propositions than I had recorded, I went back to the study and worked directly from it if possible. If not, I discarded the evidence.
Nonetheless, I must have missed considerable evidence (minor in the context of an analysis and not part of its conclusions) that bears on the propositions. A careful survey of each study's correlation matrices, factor tables, regression coefficients, and the like would likely increase significantly the evidence in Table 16C.3. But the direction of the evidence I did tabulate is so strong, as I will discuss below, that it seems highly unlikely that a finer screening would alter the overall conclusions.
I suspect that the authors of the studies tabulated in Table 16C.3 and their students will be especially interested in how their results relate to the propositions. Moreover, I hope others will track through such studies to check my tabulation and develop their own. In any case, where the tabulated evidence in Table 16C.3 differs from what the reader believes a study presents, the following should be checked:
Although these numbers give an air of precision, they should be interpreted loosely. The numbers equate different kinds of studies at different levels involving different variables, techniques and competencies.
Moreover, because there are so many results for some propositions, a small number, say one or two, for some other propositions may appear insufficient for a conclusion. Yet, these few results may be from a large-scale team effort involving years of data collection, precise research designs, and careful analysis, far overshadowing anything done elsewhere. Such possibilities are taken into account in the proposition-by-proposition assessment of Appendix 16B.
Nonetheless, aside from the necessary subjectivity of my ratings, there are sources of bias in the overall totals in Table 16C.4 that can be checked quantitatively.
One is that the studies are not equally important. Clearly, an analysis of 100 variables for all wars since 1812 is more significant than one of threats and trade in 1955. Also, even studies of the same general importance may differ in relevance to the propositions. To provide an assessment of this, I classified each study in Table 16C.3 according to its importance. The totals for the studies rated important are given in Table 16C.4, and indicate that the only significant effect of segregating important evidence is to raise the overall percentage of strongly positive evidence, from 35% to 40%.
An additional source of bias is that the evidence may be more or less direct. That is, results may bear directly on a proposition or may indirectly or by inference relate to a proposition. To enable the reader to control for this, whenever evidence was indirect or I inferred relevance to a proposition, I inserted an "I" in Table 16C.3 after the rating.
To determine the effect of this indirect evidence, all the totals were recounted for only the direct evidence. Table 16C.4 shows that overall the indirect evidence had little effect on the totals.
Another source of bias is that studies are not all independent. Many are on similar data sets, some on the same data, and some are research variations of basically the same analyses. To control for this, I eliminated from Table 16C.3 any analysis that did not provide reasonably independent evidence. Nonetheless, many studies remain that are from the same project or data sets, and therefore weight the overall totals.
There is also a small additional bias introduced by including the eight surveys, which are assessing overlapping literature. I included them, nonetheless, because they provide different perspectives on much the same results. And in the case of these surveys, I used without change their distillations of the basic findings in the literature.
Of course, there is also a possible source of bias in including my own analyses, which since 1965 have been explicitly designed with regard to the social field theory of international relations generating these propositions. There are 167 separate studies listed in Table 16C.3; 35 are mine (Rummel, colleague and Rummel, Dimensionality of Nations, or Appendix I). This is 21% of the total, which is hardly sufficient to cause the distribution of overall results shown in Table 16C.4.
In any case, Table 16C.4 shows the total ratings for my analyses. Interestingly, my results are the least positive and most negative of all the categories, indicating that if there were bias for or against the propositions operating in my work, it probably was on the side of overly narrow or constraining research designs.
A source of bias also exists in the type of research designs underlying all the evidence. The propositions are stated within the context of a Model II perspective on state behavior: behavior is dyadic, directed by actor i to object j, and each actor is affected by his own perceptions and expectations. That is, different actors have different parameters weighting the forces toward behavior. Moreover, each actor's interests lie along the distance vectors between i and j. Many studies, however, employ a Model I perspective: the same numerical parameters are assumed to affect different actors' behavior, or absolute distances are used instead of distance vectors.
Table 16C.4 shows the totals when all studies using a Model I approach or absolute distances are excluded: the evidence is then slightly more favorable and much less negative.
A final consideration. I evaluated the evidence as strongly positive, positive, and the like, depending on a number of considerations, such as the magnitude of results, degrees of freedom, and nature of the conflict variables. I surely may have tended to give positive evidence a higher ranking than negative evidence. There is an obvious conflict of interest in my doing this rating, which ideally should be done by one who strongly disagrees with the propositions to begin with: then if the distributions were to come out as in Table 16C.4, it would be even more impressive.
To at least see what would happen were I systematically biased, assume, by hypothesis, that I interpreted strongly negative results as negative, negative results as ambiguous, ambiguous results as positive, and positive results as strongly positive. Assume also no strongly positive results. To see the consequences of such possible bias, Table 16C.4 reduces each rating one level. Even then the overall evidence still favors the propositions.
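The one-level downgrade can be sketched as follows. The rating labels and counts are invented for illustration; they are not the totals of Table 16C.4:

```python
# rating levels from strongly positive to strongly negative
LEVELS = ["SP", "P", "A", "N", "SN"]

def downgrade(counts):
    """Shift every rating one level toward negative (SP -> P, P -> A,
    A -> N, N -> SN; SN stays SN), modeling the hypothesized bias."""
    shifted = {level: 0 for level in LEVELS}
    for i, level in enumerate(LEVELS):
        target = LEVELS[min(i + 1, len(LEVELS) - 1)]
        shifted[target] += counts.get(level, 0)
    return shifted

def favors_propositions(counts):
    """True if positive ratings outnumber negative ones."""
    positive = counts.get("SP", 0) + counts.get("P", 0)
    negative = counts.get("N", 0) + counts.get("SN", 0)
    return positive > negative

# hypothetical tallies: after the pessimistic shift there are no strongly
# positive results left, yet positive evidence can still outweigh negative
counts = {"SP": 40, "P": 30, "A": 10, "N": 10, "SN": 5}
shifted = downgrade(counts)
```

The argument in the text is exactly this robustness check: if the totals still favor the propositions after every rating is pushed one level down, the conclusion does not hinge on a generous rater.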
* Scanned from Appendix 16C in R.J. Rummel, War, Power, Peace, 1979. For full reference to the book and the list of its contents in hypertext, click book. Typographical errors have been corrected, clarifications added, and style updated.

1. This is only in reference to some of Haas's many analyses. For specifics, see Rummel, "A Warning on Michael Haas's International Conflict" (1978a), which was written as a result of screening his analyses for this part.