Joseph S. Lucas and Donald A. Yerxa, Editors
National Origins | The Persistence of Nationalism | Tragedy and the Pacific War | Japanese-Brazilians | Honor and History | Historiography of the American West | America and Europe | Dispatch from Canada | Evolution and War | Held Hostage | Letters
June 2002
Volume III, Number 5
European Nationalism and the Medieval Past[*]
by Patrick Geary
Little
more than a decade ago, Europe
seemed to have finally escaped the early Middle Ages. The brave new world
of the European Union was supposed to free the continent at last from the
shadows of ancient antagonisms supposedly rooted in the age of migration,
when Europe’s
peoples with their distinctive ethnicities and rivalries first appeared.
Of course, over the last decade exactly the opposite happened. The creation
of new states in East and Central
Europe and the reemergence
of xenophobia in Western
Europe have returned
to prominence old myths of medieval national origins and ethnic enmity.
The early Middle Ages are back—with a vengeance. Some of this seems at
first glance harmless or simply amusing. Consider the “history” of Slovenia,
the newest and perhaps the most innocuous of the states to emerge from the
corpse of Yugoslavia.
With a little help from medieval history, it turns out that Slovenia
is actually one of the most ancient nations in Europe.
According to one version, Slovenian political history dates back to the
6th century, when the first free principality of the ancient Slovenians
was established, “famous for its democratic institutions, legal system,
popular elections of dukes and progressive legal rights for women.”
One could not ask for a more progressive past on which to build a better
future!
However,
other medieval myths are more ominous: Jean-Marie Le Pen, who recently
won 17% of the votes cast for president of France,
proclaimed that the French people were born with the baptism of Clovis
in 496. Implicit in such a history, of course, is that Jews and Muslims cannot
be “real” French. One may detect the same sort of ethnic nationalism across
the Rhine
when the Leitkultur of Germany
is proclaimed as the standard for judging who is really German. Across Europe,
history is being mobilized to legitimize national politics, and medieval
history has a prominent place in this mobilization.
If
all this playing with history seems familiar, it should: “Scientific medieval
history,” the marriage of philology and source-critical historical investigation,
first arose as part of the nationalist enterprise in Germany as scholars
joined politicians to find an historical basis for German national identity.
Beginning with the Gesellschaft für ältere deutsche Geschichtskunde,
founded in 1819, and the publication of its Monumenta Germaniae Historica,
historians, philologists, and archaeologists collaborated to identify language
groups and to project these back into history, where they could be identified
with the peoples historical sources described as migrating into the Roman
empire at the end of Antiquity. Later in the 19th century archaeologists
such as Gustaf Kossinna identified patterns of material culture with these
linguistic and political groups, making it possible to map Europe’s peoples
before, during, and after their definitive settlement within their historic
homelands.
These
tools of German nationalism, philological analysis and archaeology, not
only created German history but also by implication created all history.
They were a readily exportable package, easily applied to any corpus of
texts in any language that could be mapped onto archaeological material
of an “ethnic” nature. Moreover, since German standards of scientific historical
scholarship increasingly dominated the 19th-century universities of Europe
and even America, foreign historians trained in the German seminar method
and critical scholarship served as ambassadors of nationalistic analysis
when they returned to their own countries.
The
historical method was equally seductive to peoples in search of states
and to states in search of peoples. Germans sought a state to embody and
extend the unique identity of the German people. France, with a long tradition
of state continuity, looked to history to find a people whom the restored
French state could embody. In between were the numerous interest groups,
Serbs, Croats, Slovenes, Czechs, Basques, Britons, and others, who wanted
proof that they, too, had the right to sovereignty, a right based in no
small part on the historical claims of the distant past.
The
philologically-based scientific history drafted into the service of nationalism
led back ultimately to the period between the 3rd and 11th centuries. The
period between the disappearance of the Roman Empire and the formation
of recognizable polities became the crucial terrain for the establishment
of nationalist claims. Here was to be found the moment of “primary acquisition”
when the ancestors of modern nations—speaking their national languages
which carried and expressed specific cultural and intellectual modes—first appeared in Europe, conquering for once and for
all their sacred and immutable territories and often, in so doing, establishing
for once and for all their natural enemies. Essential in such a vision
of nations was its “scientific”— one might well say “genetic”—nature: ethnic
identity was an objective fact of nature, an accident of birth from which
one could not escape.
In
the progression toward nationalist political movements, medievalists were
in the forefront of the so-called “awakened” intellectuals who “recovered”
the history, language, and culture of their people by showing continuities
between the language of ancient texts, the deeds of ancient kings and warriors,
and the struggling peoples of the present. Across Europe, a spectrum of
intellectuals—ranging from serious scholars such as Georg Heinrich Pertz
(1795–1876), director of the Monumenta Germaniae Historica, to the
Croatian nationalist historian Eugen Kvaternik (1825–1871) and the English
racist historian Edward Augustus Freeman (1823–1892)—transmitted this recovered
history to patriots, who in turn disseminated it throughout society, creating
mass national ideologies. The central role of historians and philologists
in this nationalist enterprise created the glory days of medieval studies
in Europe and accounts for the publication of texts, the establishment of university chairs, and the assured readership and public support that scholars enjoyed.
Recently,
medievalists have slipped from the center of historical concern along with
the national visions that they once espoused. Today many scholars reject
such views of the nation-state or of ethnic groups. We are told by Benedict
Anderson and others that the nation-state is an imagined community, not
rediscovered but called into being by 19th-century intellectuals and politicians.
Following Eric Hobsbawm, many modernists dismiss nationalist versions of
history as “invented traditions” and scoff at their naiveté. Such
an approach allows modern historians to ignore history prior to 1800, but
it does not tell us how to deal with the prehistory of the nation-state.
After all, the imagining was not done ex nihilo, but using texts
and historical materials that are real and that continue to weigh on Europeans.
As we see and hear, these 19th-century medievalists did their job as educators
very well. The use of myths such as those of the Slovenes or the Franks
to justify the present through a medieval past is proof that the likes
of Benedict Anderson or Eric Hobsbawm have had little effect on how societies
understand themselves. A new generation of ethnic nationalists finds that
the old model of history still resonates with millions of Europeans.
Nor
are all scholars ready to abandon the ancient histories of peoples as merely
invented traditions. Many sociologists and historians, often termed primordialists,
believe that while nationalism may be recent, nations, understood as ethnic
communities united in shared myths of descent, a common history, and a
distinctive shared culture (one might almost say Leitkultur) are
indeed just as old as the 19th century suggested. They can point to the
same evidence as their nationalist predecessors: to the contemporary descriptions
of peoples, their customs, religions, and political movements as well as
to the archaeological record of settlement and conquest. And they can bolster
their arguments by applying the scientific tools of historical, philological,
and archaeological analysis that for almost two hundred years have been
the hallmarks of serious historical study.
Both
the old nationalists and the new primordialists pose a powerful challenge
to contemporary scholars who are both unable to dismiss the past as simply
invented and unwilling to become the tools of nationalism once more. But
then how are we to imagine peoples in the early Middle Ages, and how are
we to approach their histories without using the tools of philology and
historical analysis developed by our predecessors?
A
start is to treat the evidence of the past seriously, but to be aware of
the biases and ideologies of the people who produced it. Peoples certainly
existed in the migration era, and their bonds were important historical
realities. However, we should reject the image of migration era peoples
as the homogeneous linguistic, cultural, and thus ethnic units so
dear to nationalists and primordialists alike. The image is one invented
in Antiquity by Greeks and especially Romans, who recognized the heterogeneity,
complexity, and malleability of their own societies but saw “barbarians”
as ahistorical and immutable, part of the natural world rather than the
world of human history. It is more appropriate to think of what our sources
term “peoples” as constitutional rather than biological or even cultural
units. Membership in a people was never simply a question of language,
cultural heritage, or common origin. From the Goths who governed Italy
and Spain to the Huns, Franks, Saxons, Alemanni, and Slavs, the groups
that appeared in Europe at the end of Antiquity were composed of people
with different languages, cultures, geographic origins, and political traditions.
One might term them more programs or parties than peoples, their names
and their leaders rallying points in shifting political and military alliances.
Indeed, these identities could change rapidly with the fortunes of war
and the advantages of alliances. What masqueraded as shared histories and
common cultures were less descriptions of reality than claims and programs
presented by their leaders in their efforts to garner support, very much
as nationalist politicians do today. Those who chose to accept these traditions
as their own, who fought alongside these leaders, could become members
of a new people, without necessarily abandoning other simultaneous or even
conflicting identities that could reemerge when advantageous.
Second,
rather than searching for the continuities linking the migration period
to the present, we should understand the essentially discontinuous nature
of ethnic identity. Nations and ethnic groups could and did change what
appeared to be their essential characteristics, including language, religion,
modes of warfare, political traditions, customs, and even histories. The
heterogeneous makeup of these groups facilitated such transformations,
since different options always existed within these societies, and they
remained, even in spite of their explicit ideologies, open to recruitment
from without. Homogeneity never really characterized these groups as completely
as kings or their propagandists pretended. Goths could become Huns and
then again Goths; Romans could become Franks. Franks could abandon their
languages and religions for Latin and Christianity. Shared legends of origin
could be contradictory or multiple, adjusted but not necessarily believed
from generation to generation.
Third,
neither of these processes should be assumed to have ended with territorialization
or some “moment of primary acquisition” of a homeland. European peoples
continue to be works in progress: new ones appear and old ones transform;
names endure but the content of these names changes. Whatever the Leitkultur
of Germany or France (or Slovenia) is today is certainly not what it was
a thousand years ago, and just as certainly it will not be the same in
the future. Peoples are processes; they always have been, and they always will be. History is not some moment in the past that justifies the present and tells us what the future must be. It is an ever-changing flood of which we continue to be a part.
Patrick
J. Geary is professor of history at the University of California, Los Angeles.
His most recent book is The Myth of Nations: The Medieval Origins of Europe
(Princeton University Press, 2001), from which this essay is adapted.
A World of Nations, After All
by Richard Blanke
Historians
trying to make sense of difficult national problems often turn for analytical
and contextual assistance to the extensive theoretical literature on nationalism,
most of it of social-science provenance. As often
as not, they come away disappointed. At least, this has been my
experience, whether the specific problem was the ability of Prussian Poles
to maintain their position against a supposedly powerful German Empire,
or the peculiarly difficult situation that faced the one million Germans
consigned to resurrected Poland after World War I, or the phenomenon of
“Polish-speaking Germans” who insisted on a national identity that ran
counter to their native language, or the complex of emotions, experiences,
and rationalizations that made possible history’s greatest ethnic-cleansing
operation: the removal of approximately 17 million Germans, mostly from
eastern regions of their own country, in and after 1945.
These
are just four of the many historical examples that demonstrate the force
and centrality of nationalism; none of them has been persuasively analyzed,
accounted for, or even addressed by the most influential works on nationalism
published since the 1960s. As for national developments in Eastern
Europe since the collapse of Communism, steeping oneself
in this literature before 1990 would have provided no better preparation
for what actually happened than a major in Soviet Studies prepared one
for the collapse itself. But while the methodological assumptions of Soviet
Studies have been subjected to a great deal of justified criticism, the
field of “Nationalism Studies” does not seem to have undergone a comparable
critique (or self-critique).
When
we turn from contemporary nationalist theory to the contemporary world,
this is what we find: first, that nationalism itself (in the general sense
that most people identify with political or cultural communities called
nations, whose interests and values take precedence over other forms of
social organization, and believe that national communities and political
units should coincide where possible) remains the world’s primary political
ordering principle. For most individuals, the nation continues to function
as Rupert Emerson’s “terminal community,” the largest community that, when
the chips are down, commands their loyalty. National identities and loyalties—
in some parts of the world these are still called “ethnic” or “tribal”—demonstrate
a continuing ability to defy the most powerful regimes and to confound
the most sophisticated elites; to prevail over the competing demands of
traditional polity, social class, economic interest, multi-national construct,
topography, and common sense; and to engender levels of individual commitment
and group action about which advocates of class or gender solidarity can
only fantasize. Only religion demonstrates, in some situations, a comparable
appeal. And once-confident pronouncements by some theorists that nationalism
was a passing phase in human history whose hold was already weakening have
been rudely challenged by recent developments, and not just in Eastern
Europe.
Second,
it is hard to ignore the fact that language and ethnicity, while they may
not underlie all forms of national assertiveness, continue to provide the
foundation for most expressions of nationalism. To be sure, some successful
nation-states have arisen on other-than-ethnic foundations, notably in
Western Europe. But we should not forget that such states were once common to Eastern Europe as well; they have since disappeared from that region. The classic examples were the multi-national Russian, Austro-Hungarian,
and Ottoman empires, which gave way to a new system of alleged nation-states
after World War I. But some of the new states, e.g., the Soviet Union,
Czechoslovakia, and Yugoslavia, were also multi-national constructions,
and now they too have capitulated to the forces of ethnic nationalism.
And this second wave of destruction of multi-ethnic polities may be even
more instructive than the first, for these latter states (and the Soviet
Union most obviously) had very powerful means of persuasion at their disposal,
and did not hesitate to apply them. Nonetheless, Europe consists today,
for the first time in its history, almost exclusively of nation-states.
Third,
while few would argue (as some did in the 19th century) that language necessarily
prefigures national identity, it does seem to function in most cases as
the leading indicator of ethnic nationality, if not its synonym. To be
sure, while a common language may invite people to see themselves as a
national group, it clearly does not oblige them to do so. Other considerations,
e.g., differences of religion, may take precedence. In today's Europe,
Croats, Bosnians, and Serbs, who speak a common language but are divided
by religion and history, are the obvious example. Overall, however, and
especially in Europe east of the Rhine, language coincides with national
identity to a historically unprecedented degree. And with just a few exceptions
(Basques, Catalans, and several peoples of the Russian Federation, all
with substantial autonomy), every self-defined European language group
with one million or more members now has a state of its own: a singular and yet little-remarked development.
The
problem is that each of these three outcomes would probably have come as
a surprise to someone whose understanding of nationalism derived primarily
from the theoretical literature that has appeared during the past quarter-century.
Pending a better explanation for the substantial disconnect between the
world of nationalist theory and the nationalist world in which we live,
I would like to suggest that it has something to do with the recent popularity
of “instrumentalist” approaches to national identity and nationalism. “Instrumentalism”
is not a formal affiliation, of course; and while most contemporary work
on nationalism reflects instrumentalist (or related post-modernist) fashion
to some degree, many authors might be uncomfortable with the label. For
this reason (and because space limitations preclude an adequately detailed
discussion of individual works), what follows pertains more to the general
posture than to individual scholars (including the several who are cited).
The
gist of the instrumentalist view is that nationalism is best understood,
not as a quasi-natural or autonomous force, but as a contingent means to
other ends—summoned, shaped, manipulated, and sometimes even created out
of whole cloth by states and/or elites. Instrumentalists typically emphasize
the mythical, packaged, and top-down character of the history and ethnic
traditions that have served as the foundation for national consciousness;
and thus the artificial, constructed, and ephemeral quality of national
identity; and thus the contingency, voluntarism, and spurious assumptions
of ethnic nationalism itself. They seek the origin of modern nations, not
in an (imaginary) ethnocultural inheritance, but in the minds of a nationalist
elite; as what Benedict Anderson has famously termed an “imagined political
community.” Thus it was the idea of nationalism that first suggested a
supposedly antecedent nation, which image was then imposed upon the general
population in order to bind it together in a common social or political
purpose. Language, religion, and ethnic culture might have served, writes
Paul Brass, as “raw materials for the intellectual project of nationality,”
but they should not be “naturalized,” granted an independent existence,
or treated as the autonomous authors of major historical developments.[1]
To be sure, national consciousness generated in this manner may well become
internalized over time, and provide people with what Ronald Suny calls
“a provisional stabilization of a sense of self or group,” but we (and
they) should not overlook its “historical construction (and) provisionality.” [2]
Moreover, adds Rogers Brubaker, “national feeling is less strong, national
identity less salient, . . . than is often assumed.”[3]
And the nation itself remains essentially a political fiction, even where
it seems to have been “realized;” for we live not in a “world of nations,”
but only in a world where the idea of nation happens to be “widely available
. . . as a category,” “nationhood” is “pervasively institutionalized,”
and “nationness” may “suddenly and powerfully happen.”[4]
In other words, the thrust of the instrumentalist approach is to discount
and relativize national identity and its putative ethnic substructure by
way of challenging the underlying force and authenticity of nationalism
itself.
Other
students of nationalism, now clearly a minority, continue to acknowledge
ethnic identity and national consciousness, whatever their ontological
status, as significant historical forces that are simply too widespread
and persistent to attribute to the conjuring ability of elites. These scholars
are classified—mainly by those who disagree with them—as “primordialists,”
although none of those to whom this label is sometimes applied (Anthony
Smith, Miroslav Hroch, Walker Connor, John Armstrong) contends that national
identity is literally primordial, i.e., “existing from the beginning; underived;
a part of creation.” They do note the obvious, however: that ethnic-national
consciousness has arisen in a great variety of places over an extended
period of time; that it has often been central to the development of nationalism
as a political force; that it has frequently manifested itself independently
(if not in the face) of political elites and changing intellectual fashion;
and that it is probably no more “imagined” or “invented” than any other
aspect of the cultural environment in which people live. True, ethnic nationalism
has often begun as a minority movement, but that hardly disproves the importance
of ethnicity. Rather, as Miroslav Hroch argues, elites can “invent” nations
only where “certain objective preconditions for the formation of a nation
already exist.”[5]
The
most important of these preconditions is an ethnocultural substructure
that may be affected by, but is rarely altogether the creation of, purposeful
state activity. (It is called “objective” only in the sense that it exists
independently of individual perception and volition and can be recognized
and described by others.) It is not the only source of national identity,
of course: in addition to “ethnic-objective” sources of national identity,
there are others that are usually called political or “subjective.” Some
nations have clearly evolved more on the basis of political conditions
than of pre-existing communities of language or culture; and even where
such community exists today, it may be more the result than the cause of
the original act of nation-state creation. Thus in some cases the state
has come first, and then produced the nation; for example, by imposing
an official language, establishing schools to teach a common history, processing
much of the male population through a national army, and introducing infrastructural
improvements and economic development to create a sense of common material
interest as well; which process receives quasi-classic treatment in Eugen
Weber’s account of how “peasants” were made into “Frenchmen.”
The
resulting sense of national identity is called “political-subjective” because
it is ultimately a function of individual consciousness and volition, molded
by a particular political environment. It has characterized some of the
most prominent nation-states, including Britain, France, Spain, Canada,
and the United States; for which reason theorists of nationalism are clearly
right to insist, as virtually all of them now do, that national identity
cannot be understood simply as a function of ethnicity. And yet, the notion
of nationality divorced from ethnicity—as essentially or exclusively a
matter of subjective sentiment shaped by political circumstances—also presents
some problems. For one thing, national identity of the subjective-political
type can only be perceived and analyzed a posteriori: one cannot
say when or where it will occur until it has already done so, for the theoretical
literature provides little reliable information about the circumstances
under which an effort to generate a subjective national identity under
ethnically indeterminate conditions can be expected to succeed or fail.
Moreover, the ethnic-objective vs. political-subjective dichotomy is to
some extent false, for most modern nations incorporate a mix of ethnocultural
foundations, political traditions, and subjective sentiment. Even nation-states
of the political-subjective type usually rest on particular cultural underpinnings;
and while multiculturalists do not like to be reminded of it, the classic
political nations of North America also had/have at their core a majority
population of considerable ethnocultural coherence, without which their
political evolution is hardly comprehensible.
More
to the point, however, while the instrumentalist approach to national identity
may be suggestive, sophisticated, personally satisfying, and even stand
to reason, recent history offers little empirical support for the notion
that states or elites can actually construct nations. Individuals may adjust
their national orientation for personal reasons—as they commonly do after
emigrating to a country with a different national culture. But the record
of “nation-building” efforts across ethnic lines, including the spectacular
failure of some very powerful states in our own time in the face of a supposedly
constructed ethnicity, suggests that the instrumentalists have rather overestimated
the ability of states and elites to create the kind of national identities
they want and seriously underestimated the strength and persistence of
ethnicity as a stubbornly independent and highly resistant historical variable.
This may also explain why some of them were so wrong-footed by the recent
resurgence of nationalism in Eastern Europe; e.g., John Breuilly, who had previously concluded that nationalism's “day has largely passed;” and Eric Hobsbawm,
who contended as late as 1990 that nationalism was becoming “historically
less important,” was probably “past its peak,” and would play a more “subordinate
and often minor role” in the future.[6]
Elsewhere
in the world as well, it is increasingly difficult to find multi-ethnic
states, especially if they are less than a century old, that seem sure
to prevail in the long run over the disintegrating pressures of ethnic
nationalism. This may explain why theorists of nationalism whose primary
interest is Africa or Asia— where lack of congruence between state and
“nation” remains the norm and rulers of political houses built on the sand
of ethnic heterogeneity struggle against the odds to instill new national
identities of the political-subjective type—incline most strongly toward
the instrumentalist position. But in the light of the recent experience
of Eastern Europe, the assurance that “routine allegiance to a civil state,
supplemented by government's police powers and ideological exhortation,”
has become an adequate replacement for ethnic cohesion[7]
has a certain whistling-past-the-graveyard quality. There is also the dubious but clearly implied corollary of the instrumentalist position: if national
identity is mainly constructed by states or elites, it follows that it
should be “deconstructable” by similar means; in other words, national
identity should be a lot more mutable than the experience of the last 150
years in Central and Eastern Europe suggests. Official languages, monolingual public schools, universal conscription, a pervasive bureaucracy, and infrastructural
improvements may have made “Frenchmen (or whatever) out of peasants,” but
such methods have rarely been able to make Frenchmen (or whatever) out
of people of an altogether different nationality. Indeed, the ultimate
ineffectiveness of most exercises in “integral” nationalism, as typified
by Imperial Germany or interwar Poland, was doubtless an important reason
for the recourse of some states to the subsequent horrors of coerced emigration,
genocide, and ethnic cleansing.
The
late Isaiah Berlin offered one plausible explanation for the tendency of
so many contemporary students of nationalism to discount and underestimate
the essential character, historical force, and continuing appeal of their
chosen subject: they may be guided more by personal and class predilections
than by a decent regard for the historical evidence.[8]
As members of the class that has traditionally placed the greatest faith
in human reason (i.e., the intelligentsia), they prefer the idea of national
identity rooted in a civil polity of informed citizens (possessed of at
least some individual choice in the matter) to an ethnic nationality conceived
as an irrational force rooted in a primordial tribal order, with little
room for individual freedom of choice. In other words, nationalism of the
political-subjective type is okay, for it has also been associated historically
with liberalism, sought popular as well as national emancipation, and had
an internationalist dimension, seeking the emancipation of people in general
and not just one’s own. But ethnic-objective nationalism has been associated
more with conservatism (and so is not okay); and its reliance on the vagaries
of customs and language rather than the explicit laws and institutions
of territorial nations also makes it seem less “safe.” And the suggestion
that ethnic nationality was the only legitimate basis for states presaged
not just the radical reorganization of Europe but an epidemic of insurrection
and war; as was indeed borne out by nationalism’s role in the wars of the
20th century—not to mention the havoc wrought by states that actually tried
to make themselves into ethnic-national communities rather than civil societies.
Berlin’s
point, however, is that scholars should not allow their own disapproval
of a historical development to lead them to underestimate its force. Much
the same point is made by the sociolinguist Joshua Fishman, who describes
“guided, exploited, mobilized ethnicity” as one of the “great destabilizers
of the status quo so precious to all establishments;” for which reason
“no one, it seems, likes language loyalty movements, unless they or their
favorite causes can profit or gain from them. And yet such movements abound,
and their end is not in sight.”[9]
Meanwhile, for the historian looking to contemporary nationalist theory
for help with some of the past century’s most destructive national conflicts,
it is as though he has just come upon a serious automobile accident—many
injured, the cars wrecked—only to find a traffic expert explaining that
people only imagine that they need to drive; they did not drive at all
until a century ago; they are actually much less devoted to driving than
is generally believed; and they are already beginning to drive less in
favor of public transportation. But the fact is that most people drive,
and sometimes they have accidents. Most people also possess a reasonably
well-developed and persistent national consciousness, which sometimes results
in national conflict; and this remains both a fact of modern life and the
source of a lot of modern history.
Richard
Blanke is Adelaide & Alan Bird Professor of History at the University
of Maine. His most recent book is Polish-speaking Germans? Language and
National Identity among the Masurians (Böhlau-Verlag, 2001).
[1] Brass, Ethnicity and Nationalism (Sage, 1991), 15.
[2] Suny, “Constructing Primordialism: Old Histories for New Nations,” Journal of Modern History 73 (2001): 865f.
[3] Brubaker, “Myths and Misconceptions in the Study of Nationalism,” in The State of the Nation, ed. John Hall (Cambridge University Press, 1998), 273f.
[4] Brubaker, Nationalism Reframed (Cambridge University Press, 1996), 16ff.
[5] Hroch, “From National Movement to the Fully Formed Nation: The Nation-Building Process in Europe,” New Left Review, no. 198 (March/April 1993): 4.
[6] Breuilly, Nationalism and the State (University of Chicago Press, 1985), 352; Hobsbawm, Nations and Nationalism since 1780 (Cambridge University Press, 1990), 181ff.
[7] Clifford Geertz, in Old Societies and New States (Free Press, 1963), 110.
[8] Berlin, “Nationalism,” Partisan Review 46 (1979): 337–358.
[9] Fishman, Language and Ethnicity in Minority Sociolinguistic Perspective (Multilingual Matters, 1989), 218.
No Quarter: The Pacific Battlefield
by Eric Bergerud
World War II was, by a great margin, the most violent conflict in modern times. Even within that conflict, Japanese and American soldiers fought the land war in the Pacific with a savage and relentless intensity that was rarely equaled and never surpassed.
and humanity coming from both sides, the essence of the conflict was something
very close to a war of annihilation.
The
grim nature of the Pacific War is best illustrated by the extremely low
number of prisoners taken. The numbers tell a dismal tale. Approximately
37,000 Japanese servicemen surrendered during land operations in the Pacific
War. This was set against a total killed in action that surpassed one million. In other words, for every Japanese soldier or sailor fighting on land who surrendered (Japan had no Marine Corps, and naval personnel took part in all major ground campaigns), there were twenty-five killed. In comparison, in northwest Europe both British and U.S. ground forces lost one prisoner for every two men killed. But the numbers
are even worse than they seem. A very high percentage of Japanese POWs
surrendered in the waning months of the Pacific War in areas such as the
South Pacific or the Philippines
where imperial garrisons had long been isolated. Men were starving, in despair, and military discipline had collapsed. What was absent, however, was an
organized surrender of a large number of prisoners during or at the end
of any of the major land campaigns, the kind of occurrence that was typical
in every other theater of war. After the surrenders of several garrisons
early in 1942, almost no American soldiers or Marines became prisoners
and lived to tell the tale. In short, when the Japanese and Americans squared
off in battle there was no quarter asked or given.
The
most common explanation for what John Dower called “War without Mercy”
is racial hatred between the Japanese and Americans. Although Japan
receives a share of the blame, deeply ingrained Western racism, as manifested
in the United
States,
is found to be the prime culprit. Scholars attempt to prove their point
with a cavalcade of propaganda posters, films, cartoons, and articles in
the press. The Japanese were systematically portrayed as animals; consequently,
so goes the argument, extermination was a natural response to Japan's
attack. Many now argue that wartime propaganda campaigns in the West were
only one manifestation among many of an old pattern of racist behavior
that created the slave trade, underlay colonial empires, and led to the
incarceration of thousands of Japanese citizens in the United
States
in 1942. I don't doubt that ethnic loathing was part of the equation. However,
for a number of reasons, I find this argument incomplete.
One
problem is obvious. If the murderous nature of combat in the Pacific was
generated by American or Western racial attitudes toward Asians, it is
very difficult to explain why U.S. forces took huge numbers of POWs in
the Korean War. If racial enlightenment had taken place or the Rules of
Engagement within the U.S. military had changed between 1945 and 1950,
I find it difficult to detect. Likewise, in Vietnam Americans captured
enemy soldiers in numbers that dwarfed those of the Pacific War.
We
should also remember that there was no history of conflict between Japan
and the United States. Nor were American soldiers fighting for the physical
survival of the United States. Many in America’s intellectual class had
long found much to admire in Japan, although Japanese aggression against
China did much to tarnish Tokyo’s image. More important, the segment in
the American political arena that wished either to enter World War II or
become more actively involved had its eyes on the struggle against Hitler.
Although conspiracy theorists continue to try to prove otherwise, I am
convinced that Washington very much wanted peace in Asia as long as Japan
did not threaten Western interests in Southeast Asia.
I doubt
such calculations had much impact on the young men swept up into war. Eighteen-
or twenty-year-olds of 1941 were no less apolitical or self-possessed than
their contemporaries of today. I have spent the last seven years researching
the Pacific War and have interviewed about two hundred veterans of all
services, mostly American. I asked each about their attitudes toward the
Japanese. When describing their wartime service, almost all expressed retrospective
hatred: many admitted that the hatred had not totally cooled over a half-century.
Interestingly, however, many G.I.s pointed out that prior to service in
the Pacific they had never met a Japanese person or an American citizen
of Japanese descent. These same men admitted that they knew almost nothing
about Japan. Obviously, Pearl Harbor changed this situation overnight.
It is also clear, in my view, that these men learned their hatred not at
home but on the battlefield.
It
is important to realize that American servicemen heading to war in 1942
lacked any systematic political indoctrination. Whatever racial attitudes
existed in society, there was no “hate Japan” curriculum in the public schools.
Prior to Pearl Harbor political indoctrination within the armed services
was forbidden because it would arouse the ire of isolationists. The young
men who swamped recruiters after December 7 received the most cursory of
training before being shipped out to destinations like Guadalcanal. (Some
Marines I interviewed recall receiving weapons training onboard troop transports
en route to the South Pacific.) In such conditions there was no time for
indoctrination or systematic propaganda. By the time the U.S. government,
aided by Hollywood, did create a formidable propaganda machine, the Pacific
battlefield had already been poisoned by bitter experience, not political
manipulation.
The
war of annihilation that marked the Pacific War resulted from unique battlefield
dynamics. In other words, the slaughter was homegrown in the Pacific jungles
rather than a reflection of outside social influences.
To
understand this violent dynamic, it is important to understand how surrender
is handled on the battlefield. At that time, the concept of surrender was
accepted by every army in the world. Even Imperial Japanese forces took
prisoners. This reflected tacit recognition that when violence was separated
from military purpose, it became either murder or suicide. It also stemmed
from powerful self-interest. If a soldier knows the enemy will take prisoners,
he is far more likely to give up. If he believes he will die regardless,
he fights on. Most armies wish to end battles as quickly as possible with
minimal losses and welcome surrender. They realize, however, that this
is a reciprocal relationship. If one side takes prisoners, so must the
other.
In
practice, surrender entails great danger on the battlefield. Surrender
is much safer if it is done by several people at once, and with some type
of prearrangement. Once fighting starts, the situation changes drastically.
If machine gunners start a fight, inflict casualties on the other side,
and then decide they wish to surrender, they are facing likely death. If
they are powerful enough, the enemy might accept surrender out of simple
self-preservation. If not, the code of battle allows men to take retribution.
If one side has committed an atrocity, the chances for safe surrender by
its soldiers also decline greatly. Consequently, surrender should be viewed
as a tacit pact. It is done to avoid mutual violence and breaks down in
the midst of bloodshed. This is true in all wars. Killing the helpless
was not unique to the Pacific.
The
most remarkable behaviors shown by Japanese soldiers were their willingness
to accept orders that meant certain death and their refusal to surrender.
To what extent the Japanese soldier’s willingness to recklessly embrace
death reflected something deep in Japanese culture I will let others judge.
However, it is undeniable that a Japanese youth in 1941, very much unlike
his American counterpart, had been subject to intense military indoctrination
in and out of the education system. Present in some form from the start
of the Meiji era, the propaganda barrage reached a fever pitch in the late
1930s. In short, every Japanese soldier was imbued with a kind of ersatz
bushido that bound the individual to the state and glorified death in battle
as the supreme act of sacrifice and spiritual purification.
Every
Japanese serviceman possessed a copy of the Emperor Meiji’s famous Imperial
Edict of 1882. It contains a striking image. The cherry blossom, beloved
of the Japanese, falls to earth in perfect form. The Edict counsels: “If
someone should enquire of you concerning the spirit of the Japanese, point
to the wild cherry blossom shining in the sun.” Thus, the Japanese honored
the sanctity of the death of the young in battle. The death of the young
is one face of war. Unfortunately for all concerned, the Japanese veneration
of death was unique and came dangerously close to becoming a cult of oblivion.
It struck at the very nature of the warrior code as understood in the West.
The Japanese viewed the idea of surrender, accepted widely in the West,
as a sign of weakness. Unfortunately, if Japanese officers did not hallow
the lives of their own soldiers, they likewise showed a contempt for the
lives of the foe. It was this terrible chemistry that made Pacific battlefields
killing grounds of unusual ferocity.
American
soldiers learned very quickly that combat in the Pacific would be unlike
that engaged in during any previous war. Pearl Harbor itself had enraged
the nation and was living proof that the Japanese could not be trusted.
As the first American expeditionary force of the Pacific War headed to
Guadalcanal, rumors were already circulating of Japanese cruelty in the
Philippines and on Wake Island. Officers were also telling their young
soldiers that the Japanese did not surrender. Japanese victories had also
given American soldiers a healthy respect for their enemies’ fighting skills.
Thus fear also was added to the brew.
Dire
predictions of a brutal war proved true in America’s first two Pacific
campaigns— Guadalcanal and Buna. In both of these campaigns —fought nearly
simultaneously during the fall of 1942 through the early winter of 1943—an
alarming pattern developed. Japanese forces showed astounding courage both
during attack and defense. However, Japanese tactical doctrine, which relied
so heavily on the fanatical spirit of the individual infantryman, ultimately
proved wanting in the face of Allied fighting skill and superior firepower.
Now
it is safe to say that any general wants his army to fight with courage
in the face of bad odds. The Japanese soldier fulfilled this duty to the
fullest in the Solomons and in New Guinea. In both battles Imperial forces
inflicted serious losses and cost the Allies valuable time. Unfortunately,
during the concluding stage of both battles the Japanese battle ethos degenerated
into a completely pointless waste of life, most of it Japanese. By January
1943, the Japanese were facing a hopeless position at both Guadalcanal
and Buna. Recognizing this, the Japanese high command evacuated some 12,000
men from Guadalcanal. However, this still left several hundred Japanese
infantry manning positions in a ridgeline south of Henderson Field that
the Americans called Mount Austen. These soldiers were isolated, starving,
and in miserable physical condition. They had done their duty and should
have surrendered. Instead, in every case, these garrisons fought to the
last man, often ending resistance in a pointless but terrifying “banzai”
charge. Many Japanese survivors, too weak to fight, set off their own grenades,
vowing to meet their comrades at the Yasukuni Shrine.
The
end game near Buna was even more forbidding. After bleeding Australian
and American units dry for three months, the Japanese outposts near Buna
began to fall apart. In the last days scattered Japanese units made meaningless
attacks, and scores of Japanese soldiers committed suicide. Few incidents
during the Pacific War were more pitiful or more tragic than the end of the Japanese resistance on January 21, 1943. After withstanding heavy pressure
from American and crack Australian infantry, a disintegrating Japanese
battalion north of Buna was attacked by four companies of the U.S. 41st
Division. Although attacked by a force no larger than those beaten off
many times before, the Japanese perimeter, which at this time was deep
inside Allied lines, simply fell apart. Allied artillery and mortars pounded
the position in the morning. American infantry, without the aid of tanks,
penetrated the perimeter quickly. Perhaps dazed by the bombardment or simply
exhausted, Japanese infantry wandered in the open as American soldiers
shot them down. In the words of one soldier, “We caught most of the Japs
still underground or trying to extricate themselves from shattered bunkers.
The garrison panicked and ran up Sanananda Road across our line of fire.
We had great killing.” At the end of the day the Americans counted 520
dead, one of the bloodiest days endured by the Japanese Army up until that
time. American losses were six killed. What made this incident so particularly
wretched is that a large, well-garrisoned perimeter was exactly the type
of position that could have arranged an orderly and relatively safe surrender.
Fighting in a hopeless position, almost as many Japanese soldiers perished
in that single spot in a few hours as the Americans had lost in the entire
Buna campaign. Instead of honor, the Japanese chose death. In doing so
they taught yet another division that Japanese soldiers would not surrender
and added fire to the lethal momentum already building.
The
pattern seen first at Guadalcanal and Buna was repeated again and again
in the Pacific War. The Allies would attack. The Japanese would fight with
great courage and tactical skill, although not always with great strategic
wisdom. American forces suffered painful losses, and most campaigns dragged
on well past the date predicted by the U.S. commanders for victory. However,
at some point, American firepower would begin to take its toll, and the
Japanese resistance would begin to disintegrate into a macabre and senseless
death orgy during which almost all victims were Japanese. So the tragedy
at Buna was repeated often on a much larger scale throughout the Pacific
War. Among the most horrid examples were the Cliffs of Death on Saipan,
the Meat Grinder on Iwo Jima, and the Suicide Caves on Okinawa.
As
American troops on Guadalcanal learned first-hand that the Japanese would
not surrender, they also learned a related and even more painful lesson:
an attempt to surrender on the part of a Japanese soldier might actually
be a ruse designed to enable him to take an American with him
on his journey into death. The most dramatic such occurrence was an incident
forgotten today but, at the time, known by every American soldier in the
South Pacific—the infamous Goettge Patrol. Immediately after the American
landing on Guadalcanal, a handful of Japanese and Korean construction laborers
surrendered. One of the Japanese told the Marines that others in his unit
wanted to surrender. LTC Frank Goettge, the First Marine Division’s intelligence
officer, convinced commanding General Vandegrift to allow him to take a
twenty-five-man patrol up the coast and arrange the surrender. Reluctantly,
Vandegrift gave his permission, and the patrol left the next morning. By
evening one survivor had made it back to American lines. It is very possible
that the Goettge Patrol perished due to tactical incompetence and not Japanese
design. However, to every Marine it was a dramatic example that the Japanese
were ruthless and treacherous. It was known to be difficult to get Japanese
surrenders: from then on it was also believed to be dangerous. The outcome
was that many American soldiers simply didn’t try.
Dozens
of veterans described small-scale versions of the Goettge Patrol that they
witnessed in the Pacific. Lou Marvellous was a squad leader with the First
Marine Division and gave interesting commentary on the fear and confusion
caused by the Japanese:
I have
thought about the Japanese for fifty years. I had a high regard for Japanese
soldiers. They were brave, tenacious, sly, and very good at pulling off
the unexpected. But in another way, I feel that many of the things they
did were simply stupid. They sacrificed their own men needlessly, for no
purpose at all. During a battle along the Matanikau three or four were
straggling toward us as though they were going to surrender. There must
have been a dozen of us with a bead on them. Sure enough, one bent over
and there was a carbine or sub-machine gun slung on his back that his comrade
tried to grab. We shot them down instantly. Later we were out on a large
operation. There were maybe a hundred of us. Suddenly, one Japanese officer
comes charging out of the jungle screaming and waving his sword. We riddled
him. What did he accomplish? He was only one man. What could he hope to
accomplish? They did this type of thing so many times. It got to the point
where we took no prisoners. It wasn’t a written order, but a way to survive.
No one should take a chance to take a guy prisoner who might try to kill
him.
I don’t
know how you can defend this attitude. I feel the military in Japan fooled
their people. Somehow they convinced their soldiers that their lives belonged
to someone else. So the Japanese soldier was tough and smart, but at the
end he was finished and could only blow himself up.
I have
emphasized accounts of early engagements because they helped create a kind
of battlefield culture in the Pacific. All of the events related above
became lessons learned for soldiers trained later in the war. More important,
each account was passed on from soldier to soldier. The rumor mill works
overtime in war, and no doubt bogus or exaggerated accounts of Japanese
treachery appeared in large numbers. As the war continued and more American
soldiers received a good dose of political indoctrination and propaganda,
the audience was well inclined to believe the worst of the stories. Sadly,
as the war progressed, and the bloom of Japan’s victories began to wane,
it is very possible that more Imperial troops might have been willing to
surrender had the situation allowed it. Unfortunately, these servants of
the emperor met Americans increasingly convinced that the Japanese would
not surrender, or, if surrender was offered, it might prove a ruse. The
Americans believed that it was not worth the risk to take prisoners. Japanese
propaganda told soldiers and civilians alike that the Americans were butchers
who would murder anyone tempted to capitulate. By 1944, to a sad degree,
both sides were preaching a kind of truth.
Despite
the vile circumstances, the American Army made systematic efforts to take
Japanese prisoners throughout the war. Each division had Japanese-American
intelligence personnel who had, among their duties, the job of attempting
to get Japanese soldiers to leave their entrenchments and surrender. In
some cases they succeeded. Yet these men, despite brave work and a largely
unheralded record, were swimming upstream against a powerful current of
mutual bloodletting.
A tragedy
took place in the South Pacific that stemmed largely from the grotesque
manipulation of the Japanese people by Japan’s military government. By
successfully convincing their soldiers to find meaning in oblivion, and
to accept the frightening idea that spiritual purification comes through
purposeful death, the Japanese government created the psychological framework
for total war. I think that it is very possible that the well-earned image
of Japan as a fanatical, even suicidal foe, had a profound influence on
the extremely brutal measures taken by the United States to end the Pacific
War. In an era when the United States and other nations of the world may
again be facing an enemy propounding a cult of death, this is a sobering
thought.
Eric
Bergerud is professor of history at Lincoln University. He is completing
the last volume of a trilogy on World War II in the Pacific.
The Japanese in Brazil: From Immigration to Dekassegui
by Mieko Nishida
In
1980 the first Japanese-Brazilian woman film director, Tizuka
Yamasaki, made her international debut with Gaijin: Os Caminhos da Liberdade.
Gaijin, a Japanese word literally translated as “foreigner” that
usually refers to white foreigners in Japan,
is the term with which Japanese-Brazilians have always referred to non-Japanese-Brazilians.
But as Yamasaki says, “It is indeed the Japanese who have been gaijin
in Brazil.
They had been abandoned by their native country, and have been alienated
in a new land.” Yamasaki, born in 1949 to an Issei
father and a Nisei mother, based this movie on the life of her maternal
grandmother. It captures various aspects of Japanese immigration to Brazil,
including the hardships of coffee plantation life.
It
is widely believed that Japanese-Brazilians have enjoyed success as a model
minority group in urban Brazil. Dedicated to educating their children,
highly motivated immigrant parents left the coffee plantations for the
city. Brazilian-born children attended the University of São Paulo
and became lawyers, medical doctors, and dentists. But is this picture
of the Japanese-Brazilian elite a social reality or a cultural myth?
The
state of São
Paulo
has the second largest concentration of persons of Japanese descent outside Japan
(Hawaii
is first). In Brazil
as a whole, the total number of persons of Japanese descent amounts to
some 1.3 million, almost 1% of the entire Brazilian population of 155 million.
Japanese
immigration to Brazil
was a response to the great demand for labor on the coffee plantations
in the Southeast following the abolition of slavery in 1888. At first,
European immigrants filled this demand. But the flow of workers from Europe
came to a halt by the beginning of the 20th century. Japanese immigration
to Brazil
began in 1908 and, after the enactment of anti-Asian immigration laws in the 1920s in the United
States,
quickly rose to more than 100,000 by the 1930s.
During
the late 1930s and early 1940s, Japanese-Brazilians were faced with two
drastic changes. One was Brazilian nationalism; the other, Japan’s involvement
in World War II. Brazil’s President Getulio Vargas strictly enforced the
country’s immigration laws and its assimilation policy. Schools taught
by aliens and in foreign languages were suppressed in 1938; almost all
Japanese schools, numbering about 600 at the time, shut down. Beginning
in 1940, foreign language newspapers were subjected to censorship. After
Pearl Harbor, Japanese immigrants were no longer permitted to publish newspapers,
even in Portuguese. Furthermore, Brazil severed diplomatic relations with
Japan in January 1942, and Japanese-Brazilians lost all freedom to travel
inside the country. The Brazilian government prohibited Japanese immigration
in 1943, although immigration was allowed to resume a decade later.
The
urbanization of the Japanese-Brazilians did not take place until after
Japan’s defeat. By then, most had given up hope of returning to Japan with
a substantial fortune; instead they decided to settle in Brazil with their
Brazilian-born children. During the 1950s, they began to move to major
cities on a large scale. Rapid urbanization did not mean that Japanese-Brazilians
moved up the social ladder. On the contrary, most abandoned positions as
landowners and independent farmers to work in small family businesses—laundries,
vegetable stands, grocery stores, beauty salons, and craft shops. Without
sufficient capital to invest in these businesses, Japanese-Brazilian entrepreneurs
relied heavily on the unpaid labor of family members, particularly Nisei
children. Thus a typical Japanese-Brazilian family strategy was created:
older children worked for the family business, while younger children,
particularly sons, were sent to college. Two classes of Japanese-Brazilians
emerged: college-educated, assimilated, white-collar professionals; and
members of the working class who continued both to speak Japanese and to preserve
Japanese values and customs.
In
the mid-1980s many South Americans of Japanese descent—not only from Brazil
but also Peru, Argentina, and Paraguay—began emigrating to Japan. In the
case of Japanese-Brazilians there had always been a cultural factor motivating
them to move “back” to Japan. Despite the image of Japanese-Brazilians
as a successful urban middle class, they had been consistently exposed
to the larger society’s pejorative racial and ethnic biases under the label
of “Japanese” (japonês/japonêsa), which often limited their
social advancement. In response, Japanese-Brazilians defended themselves
by romanticizing Japan as their homeland, while looking down on Brazil
as a third-class nation (santou-koku).
In
both Brazil and Japan the new immigrants are known as dekassegui, derived
from the Japanese word dekasegi, the term for seasonal labor migrations
from rural Japan to major cities such as Tokyo. In 1990 the Japanese government
offered persons of Japanese descent permanent residency in Japan, and the
number of dekassegui workers from Brazil quickly rose to some 250,000 by
1997. But during the last few years of the 20th century, in the face of
a major recession in Japan, this number dropped drastically. Now, many
former dekassegui have moved back to Brazil.
Dekassegui
seems to have provoked a strong sense of shame among Japanese-Brazilians.
Many Japanese-Brazilians try to hide the fact that their family members
have gone to work in Japan. Some dekassegui returnees say that they went
to Japan not to work but just to look around and have a good time.
In
fact, about half of the “Japanese” migrant workers from Brazil, both men
and women, worked on assembly lines for minimum wages and without the mandatory
health insurance required by the Japanese government. Others worked in
the construction industry, as well as in the service industry, including
hotels, hospitals, and golf courses. The Japanese did not accept them as
equals. For the indigenous Japanese,
Japanese-Brazilians
were foreigners, manual laborers, and also descendants of the “undesirables”
who had emigrated from Japan many years before. As an old Japanese saying
goes, “emigrants are the abandoned (Imin wa kimin).” Japanese-Brazilians
also tended to separate themselves from the Japanese to a considerable
degree. They kept in touch with their “fellow Brazilians” and formed ethnic
Brazilian communities in various small and middle-sized industrial cities
in Japan, such as Hamamatsu in Shizuoka prefecture, Toyota in Aichi prefecture,
and Oizumi in Gunma prefecture. In the case of Oizumi City, commonly called
Brajiru Taun (Brazil Town), the non-Japanese population constitutes only
12% of the whole. Within such enclaves, Japanese-Brazilians speak Portuguese,
publish Portuguese newspapers, and operate their Brazilian restaurants,
nightclubs, and samba bars.
On
the other side of the Pacific, the impact of dekassegui on Japanese-Brazilians
in São Paulo has been notable in various ways. Liberdade, the Japanese
district of the city of São Paulo, has virtually disappeared. Most
of the Japanese stores and restaurants have been sold to Chinese and newly
arrived Korean immigrant merchants, who continue to use the stores’ original
Japanese names and employ Japanese-Brazilian clerks to maintain their Japanese
identity for the sake of business. Many old and well-known Japanese hotels
and travel agencies, which used to cater to Japanese and Japanese-Brazilian
tourists, have gone bankrupt.
It
is said that almost every Japanese-Brazilian family has at least one member
or relative who works or has worked in Japan. It is also commonly said
that if a Japanese-Brazilian household earns the average wage of a Brazilian
middle-class family, approximately $1,000 (U.S.) a month, none of the family
members would migrate to Japan for dekassegui. This suggests that the image
of Japanese-Brazilians as a successful urban middle class may be more myth
than reality. Furthermore, Japanese-Brazilians’ well-known enthusiasm for
higher education and urban professional occupations has been declining
notably. Many Japanese-Brazilians with college degrees have chosen to work
as manual laborers in Japan. The youth from families with relatively limited
means have also opted for dekassegui in Japan instead of college in Brazil.
Dekassegui
has destroyed much of the myth of the collective success of Japanese-Brazilians
and revealed the reality of nonelite Japanese-Brazilian life. By returning
to their “homeland,” the “Japanese” in Brazil have found themselves becoming
gaijin again—as Brazilians in Japan.
Mieko
Nishida is assistant professor of history at Hartwick College. She is the
author of Slavery and Identity: Ethnicity, Gender, and Race in Salvador,
Brazil, 1808–1888 (Indiana University Press, forthcoming).
Honor’s
History across the Academy
by Bertram
Wyatt-Brown
The
concept of honor provides the ethical foundation for an array of discriminations
about race, class, gender, sexual orientation, ethnicity, religious conviction,
and age.[i]
For centuries those sources of pride and prejudice have subjected countless
numbers to rank injustice while elevating arbiters of custom to righteous
ascendancy. It goes without saying that there are many definitions for
the word “honor.” Yet in the 1960s the work of Julian Pitt-Rivers, Pierre Bourdieu,
and other anthropologists initiated a new understanding
of honor as a means of organizing society and its values. In accordance
with that outlook, the remarks that follow stress the darker and more troubling
aspects of the code. They also set a social science orientation clearly
against popular meanings of honor, which are no less valid but not pertinent to this
discussion. To a degree, other academic disciplines have pursued the ethical
construction with considerable effect. American historians, however, have
been slow to recognize the significance of this nearly universal way of
ordering social arrangements.
In
the West, honor and its code of behavior long predate Christianity. At
the same time, honor flourishes feverishly in our own day in other parts
of the globe, particularly in the Middle
East
and South
Asia.
In spite of variations from one locale to another, diverse emphases from
nation to nation, and changes over time, honor has retained a remarkable
consistency. Historians in the United
States
should welcome its explanatory power.
By
now, the broader apprehension of honor may be familiar enough that a lengthy
explanation is unnecessary. Suffice it to say, the scheme is based largely upon
the nature of human existence—the accidents of birth and rearing no matter
how outrageous, unjust, or irrelevant they might seem to the modern mind.
Among them are: the primacy of male over female, senior over junior, inherited
over acquired wealth. A noted blood lineage is favored over obscure or
disreputable origin. In some societies an allegedly superior claim to religious
truth separates the body politic. A freeborn condition over enslavement
marks the honorable from the shamed. Other distinctions come to mind but
are too numerous to mention. Psychologically, honor requires that self-identity
not be determined by self-generated factors. Rather, assessment of worth
relies upon a watching public. The individual then must accept that community
judgment (good or bad) as valid and adopt it as part of the inner self.
Honor thrives in the absence of law. Yet it often can exist as an alternative
to law or provide the basis of law, as in the case of the Muslim sharia.
Sometimes it takes the form of far grimmer mob actions or lynchings than,
for instance, the “shivaree” in the musical Oklahoma!
For
most cultures, a warrior spirit elevates honor to the zenith of ideals.
The interdependence of men at arms upon the reliability of their comrades
and the necessity of an ironclad discipline make honor a vital part of
military culture. Essential though it may be in that domain, the same tenets
of authority can play a deplorable role in other situations. For instance,
the sociologist Orlando Patterson in Slavery and Social Death (1982)
has studied nearly two hundred slave societies, with their inhuman proclivities
and determination to obliterate autonomy. All of them, he concludes, were
honor societies. Yet, not all honor societies, he observes, were based
on bondage. With or without the regimen of slavery, devotees of honor may
mistake gentleness for weakness, kindness for effeminacy. Of course, magnanimity,
condescension, and noblesse belong among traditions of honor. Yet these
alleged virtues must appear to be uncoerced: the giver sets the terms,
not the recipient. In response to outside criticism, men sensitive about their
honor insist on violent repudiation. They are likely to lambaste reluctant
or prudent skeptics with charges of cowardly, unforgivable acquiescence.
Compromise and the eagerness to negotiate rarely rank high in honor-shame
societies. On that unhappy basis, nations may be pressured into making
war. Aggrieved or subjected peoples may be swiftly led into rebellion,
even to the point of self-destruction. In earlier days, gentlemen might
have felt compelled by the opinion of peers to duel, while lesser folk
were goaded into eye-gouging, knife-throwing, or ambushing an enemy with
firearms.
To
demonstrate the uses to which this paradigm has been put, we might begin
with the ancient world. In biblical studies, the explorations of David
Arthur de Silva, Krister Stendahl, Bruce Malina, and others have clearly
demonstrated the significance of the honor code in the cultures that produced
the Old and New Testaments, the letters of Paul, and other scriptural sources.
The teachings of Christ rejected the tenets of honor, with few exceptions.
George Fenwick Jones’s rephrasing of the Sermon on the Mount put the honor
code neatly years ago: “Blessed are the rich, for they possess the earth
and its glory . . . . Blessed are they who wreak vengeance, for they shall
be offended no more, and they shall have honor and glory all the days of
their life and eternal fame in ages to come” (Honor in German Literature
[1959], 40).
The
sociological approach now abroad in the theological field places Christ’s
message in the context of contemporary conventions. The precepts challenged
the rigidities of hierarchical codes, hot demands for retribution, obsessive
worship of power however well disguised as virtue, and ascriptive modes
of behaving. Was it truly honorable to turn the other cheek, give all one’s
possessions to the poor, suffer little children to come forward, bow down
in awe only to God, consort with tax collectors, servants, and lepers on
terms of equality, and offer grace to prostitutes and forgiveness to criminals?
Hardly. Yet, as these authors reveal, for the Christian message to reach
beyond a tiny sect there had to be concessions to existing arrangements.
Paul was a master of the strategy. By accepting bondage, male predominance,
and other concessions, Christ’s apostle provided a bridge between the pagan
code and the new dispensation. Of course, throughout Christian history
rejections of honorable practices (bows to authority and denunciation of
vainglory, for example) have almost invariably accompanied pietist and
restorationist movements.
The
application of honor studies to the ancient Near East helps to explain
how Christianity itself has managed through the ages to make peace with
the community’s moral codes. Yet the chief historical scholarship involving
the ethic lies in the realm of European history—from ancient to modern,
from the Baltic to the Mediterranean. Among classicists, who can deny the
acute perceptiveness of Moses I. Finley’s The World of Odysseus
(1954)? In addition, Kenneth J. Dover, Paul Rahe, and many others have
treated the relationship of honor, slavery, and timocratic arrangements
in their work on the ancient Greeks. (Anthropologists and
historians of modern Greece have also concentrated on ethical values.)
For
generations, scholars of medieval Europe concentrated on renditions of
chivalry and the rituals and institutions of knighthood. New work that
embraces more than just the warrior class, however, has appeared, most
particularly with regard to Scandinavian, Frankish, and Celtic cultures.
Patrick J. Geary, Peter Brown, and William Ian Miller offer new approaches
to the early medieval period. Geary’s study of relic-stealing, Furta
Sacra: Thefts of Relics in the Central Middle Ages (1978), demonstrates
the talismanic honor of possessing saints’ bones, whether bought or stolen.
Especially noteworthy from the perspective of honor is Miller’s Bloodtaking
and Peacemaking: Feud, Law, and Society in Saga Iceland (1990), a stunning
interweaving of religious ideology, violence, and honor.
The
interpretations of European cultures by Johan Huizinga and Georges Duby
and a revived interest in the studies of Norbert Elias laid the groundwork
for more recent scholars of the 12th to 15th centuries. Scarifications, among
other crudities of medieval life that historians have examined, were not
uncommon. In a more palatable vein, the work of medievalists and historically
minded anthropologists such as Emmanuel Le Roy Ladurie links medieval honor
to religious thought and practice (see John G. Peristiany and Julian Pitt-Rivers,
eds., Honor and Grace in Anthropology [1992]). Research on the moral
values of the 15th to 17th centuries, however, has flourished even more
than in the earlier era, thanks in large measure to Natalie Zemon Davis.
Worth special mention are her insights into youthful mob action (Society
and Culture in Early Modern France: Eight Essays [1975] and “The Reasons
of Misrule: Youth Groups and Charivaris in Sixteenth-Century France,” Past
and Present 50 [February 1971]). Likewise, E. P. Thompson’s brilliant essay,
“‘Rough Music’: Le Charivari Anglais,” Annales E.S.C. 27 (March–April 1972),
explains the ritual of a skimmington used to punish offenders who threatened
local moral order.
Turning
to the era of early modernization, we find a special concentration of scholarship
on the custom of dueling abroad. Robert A. Nye (Masculinity and Male
Codes of Honor in Modern France [1993]) and Kevin McAleer (Dueling:
The Cult of Honor in Fin-de-Siècle Germany [1994]), in particular,
have illuminated the relationship between such often deadly encounters
and class exclusivity—the effort to shed bourgeois status and enter the
aristocratic ranks. Moreover, works by Joanne Freeman (Affairs of Honor:
National Politics in the New Republic [2001]) and this author (The
Shaping of Southern Culture: Honor, Grace, and War, 1760s–1880s [2001])
provide fresh material on dueling in a democratic American context. They
show how patron-client relations—the need for patrons to show valor or
lose their indispensable, younger political clients— played a major role
in the famous encounters of Alexander Hamilton and Aaron Burr in the North
and Andrew Jackson and his opponent, Charles Dickinson, in the Southwest.
Despite popular fascination with duels, honor and personal violence furnished
a lethal combination among the other classes of society, too, both here
and abroad. Anthropologist Frank Henderson Stewart notes in Honor
(1994) that peasants in Europe upheld a fierce loyalty to the code. Likewise,
the Old South’s yeomanry and poor were hardly immune from the dictates
of honor. Nor did they hesitate to shame alleged offenders—sometimes even
those belonging to the better classes. Elliott J. Gorn demonstrated
the backcountry’s regard for matters of honor in “‘Gouge and Bite, Pull
Hair and Scratch,’” American Historical Review 90 (February 1985).
Using the theme of honor, David Courtwright on a broader canvas has depicted
the persistence of male violence from early American history to the anarchy
of inner-city life. In more recent times, honor surfaces in street gang
warfare and territorial claims, as Ruth Horowitz points out (Honor and
the American Dream: Culture and Identity in a Chicano Community [1983]).
And of course, most Americans would recognize, if only from The Sopranos,
familial honor in the American Mafia and its original Sicilian habitat.
Honor
and shame—as well as shamelessness and violence—ordinarily are associated
with small group or close community situations such as the peasant villages
of Greece or Sicily. Nonetheless, nations, regions, and large ethnic groups
have also resorted to extremities in avenging perceived wrongs—all in the
name of a collective honor. One of the most interesting studies in this
vein is the essay by Avner Offer, “Going to War in 1914: A Matter of Honor?”
Politics and Society 23 (June 1995). When the Gavrilo Princip shot
Franz Ferdinand at Sarajevo, he had avenged the Austrian prince’s insulting
presence on a day celebrating Serbian Nationalism. In turn, Offer argues,
the assassination represented an affront to the monarchy. Obsessed with
the notion of honor, Kaiser Wilhelm II induced his royal neighbor to rattle
sabers unless Serbia demonstrated abject contriteness. The series of ultimatums,
each duly refused, was supposed to be most honorable. Yet, like a duel—as
General Helmuth von Moltke feared—it was suicidal, and not for Germany
alone.
Years
before, Southern secessionists had responded similarly to threatening events.
Eager for war in 1860, the slaveholding fire-eaters denounced their Southern
critics as base submissionists and proclaimed themselves chivalrous protectors
of home and property, human and inanimate. They demanded immediate vengeance
for the insult of a “Black Republican’s” presidential victory. The resulting
struggle destroyed the very institution, slavery, that the prescriptions
of honor were supposed to uphold.
Finally,
we turn to the world today to discover that the old ethical ways not only
persist but drive the impassioned forces of terror and war. In the West,
the compulsions that drove men to duel, bite, or kill—or to announce pride
in white skins and brave manhood—no longer carry their former moral sanction.
Elsewhere, though, honor thrives as a deadly, anti-modern force. In the
minds of millions it is currently linked to the cause of Islam, whether
justifiably or not. Historians—along with ordinary citizens—might learn
much from a deeper understanding of so potent an ethical design. Honor
feeds on desperation, all-consuming hatred, poverty, and a burning sense
of humiliation in the face of an enemy’s prosperity and military might.
To know the history of the emotionally drenched modes of honor may not
prevent recriminations, acts of brutality, and force of arms. But by reducing
the corrupting dread of shame and repressed self-loathing, roads to peace
may be found. If not, it will take war to immobilize those unable to grasp
anything but the superiority of martial strength. The Nazis, dedicated
to Blut und Ehre, blood and honor, discovered that unwelcome truth.
Let us hope it does not come to that again.
Bertram
Wyatt-Brown is the Richard J. Milbauer Professor of U.S. History at the
University of Florida. His most recent book is The Shaping of Southern
Culture: Honor, Grace, and War, 1760s–1880s (University of North Carolina
Press, 2001).
[i] I
wish to thank Anne Wyatt-Brown and Randall Stephens for their editorial
help. More extensive bibliographical citations are available on my web
site http://www.clas.ufl.edu/users/bwyattb/, under
honorhistory.
The
American West and Its Historians
by Richard
W. Etulain
Academic
historians have been studying the frontier and the American West for more
than a century. When Wisconsin
historian Frederick Jackson Turner told his colleagues in 1893 that the
frontier was the most significant feature of the American past, he fired
a historiographical shot heard around the English-speaking world
for decades. Interest in frontier and western history remained high until
the 1960s and then waned in the 1970s and early 1980s. But since the late
1980s, western American history has regained its earlier status as a field
alive with activity and controversy.
With
some ups and downs in popularity and minor modifications, the ideas in
Turner’s classic essay “The Significance of the Frontier in American History”
held sway until the late 1960s. Along the way, other historians supported
the frontier story or followed Turner’s second idea, the significance of
sections (regions), which he advanced later in his career. In the 1920s
and 1930s, one advocate of both frontier and regional interpretations,
Herbert Eugene Bolton, urged all American historians to pay more attention
to Spanish influences on American culture. Texas
historian Walter Prescott Webb fired the imaginations of other regionalists—and
more recently environmental historians—with his provocative book, The
Great Plains (1933). In addition, Kansan James Malin,
in the 1930s and 1940s, urged historians to pay more attention to subregional
and ecological patterns even as he systematically utilized statistical
data well before other scholars. Later, Ray Allen Billington
produced his magisterial overview of frontier history, Westward Expansion: A
History of the American Frontier (1949), the most widely adopted western
text from mid-century until the 1970s.
In
the 1950s, two other scholars provided new angles of vision through which
to view the American West. American Studies specialist Henry Nash Smith
taught historians to think carefully and analytically about myths and symbols
cohering around the West in his classic work, Virgin Land: The
American West as Myth and Symbol (1950). He also urged western specialists
to take more seriously works of mass culture such as dime novels, travel
narratives, and government documents. Concurrently, western historian Earl
Pomeroy, revising Turner’s thesis, argued that continuities of eastern
social, cultural, political, and economic thought and experience bulked
at least as large as frontier experiences in shaping the American West.
In his books and essays, Pomeroy called for a reorientation of western
history, urging historians to place more emphasis on persisting European
and cis-Mississippi influences in the trans-Mississippi
West.
These
interpretations held sway until the end of the 1960s. But that yeasty decade
markedly reshaped western historiography from the early 1970s onward. These
transformations signaled the end of the Turnerian
dominance and the rise of new emphases on racial/ethnic, gender, and environmental
themes in western historical writing.
If Turnerians seemed to disappear after 1970, specialists in ethnic history
surfaced in increasing numbers. Some came by way of other fields. Robert
Utley and Francis Paul Prucha, authors of earlier frontier military histories,
produced notable works on government Indian policies and white contact
with Indians. Others, such as Peter Iverson and Albert Hurtado, utilizing
the findings of anthropologists and ethnologists, turned out studies of
Native Americans informed by insights novel to most historical writings
about the West. Simultaneously, Sandra Myres, Julie Roy Jeffrey, and Glenda
Riley published the first overviews of women’s experiences on the trans-Mississippi
frontier.
The
1970s also saw the first crop of environmental histories of the American
West. From Turner onward, and especially in the writings of Webb and Malin,
the expansive and varied settings of the West had encouraged studies of
the frontier environment. But in the 1970s and 1980s path-breaking new
books on the western environment appeared, with those by Richard White,
Donald Worster, William Cronon, and Patricia Nelson Limerick gaining widespread
attention. The theoretical and ecological sophistication of these studies
clearly indicated how much western historians were in the forefront of
American environmental historiography.
By
the mid-1980s two trends were clear in historical writing about the American
West. Hardly more than a Turnerian or two was still alive, and several
new thematic emphases were enriching the field. While these new interpretations
were invigorating western historiography, revealingly, no one had yet called
for a fresh synthesis to replace previous overviews.
But
within the next decade interpretations of the West dramatically changed.
In the brief period from 1987 to 1991, new books redefined the field and
pioneered what became known as the New Western history. The key volume
in this movement was Patricia Nelson Limerick’s The Legacy of Conquest:
The Unbroken Past of the American West (1987). Also of notable importance
was Richard White’s mammoth text, “It’s Your Misfortune and None of My
Own”: A New History of the American West (1991). Along with these two innovative
works, the writings of Donald Worster and William Cronon (the latter of
whom repeatedly asked not to be included among the New Western historians),
and other important volumes by Peggy Pascoe, William G. Robbins, and Ramón
Gutiérrez helped to launch the New Western history by the early
1990s.
The
new movement in western historical writing, linked particularly to Limerick,
White, et al., defined itself through its subject matter, point of view,
and tone. Limerick’s lively and widely circulated Legacy of Conquest encapsulated
most of the major ingredients of the New Western history. Taking the continuity
of western history (“the unbroken past of the American West”) as one of
its themes, Limerick’s volume stressed the persisting racial and religious
discrimination, economic selfishness, and environmental destruction of
many Euro-Americans residing in the West. For Limerick, Turner and many
previous western historians too often stressed “triumphal” subjects. That
is, these earlier writers praised the achievements of pioneers without
seeing the darker sides of their actions, including mistreatment of non-white
competitors. Limerick’s book, on the other hand, overflows with criticism
of the western past. Limerick says her interpretations are “realistic,”
but her critics assert that her views are decidedly pessimistic.
The
New Western historians traveled paths both similar to and different from
those of many other American historians. Like many U.S. social historians
of the 1970s and 1980s, the New Westerners scrutinized previously neglected
subjects. For instance, Richard White examined overlooked environmental
experiences shaping Island County in the state of Washington, the “roots
of dependency” entangling Native American groups, and the “middle ground”
cultures emerging from Indian and European conflicts and combinations in
the Great Lakes region. Similarly, Donald Worster paid close attention
to class and cultural influences on environmental and political decisions.
Other
trails the New Western historians followed were more particular to western
historical writing. Breaking from previous interpretations, the New Westerners
criticized the ideas of Turner, Billington, and other frontier historians
or left them out of their histories. White’s valuable, interpretive overview,
for example, omitted mention of both Turner and the frontier; and Limerick
frequently noted the large limitations of Turner’s frontier thesis. Ethnic,
racial, and class competitions; the destructive power of capitalism; and
maltreatment of the environment—these became dominant subjects in New Western
histories. And in these histories, conflict took center stage.
From
the late 1980s until the mid-1990s, the New Western history gained notoriety
and increasing acceptance among teachers and students of the American West.
Then, at the end of the 1990s, still another approach to the West began
to surface. Gradually, a few westerners, perhaps dissatisfied with the
strictures and tone of the New Western history, perhaps disagreeing with
its emphases, began to produce a fresh crop of books. These studies depict
a West rife with cultural contacts leading to competition and conflict
but also a region sometimes encouraging cultural conversations and even
a few combinations. Quite simply, another trend in western historiography
is boiling to the surface, one moving beyond conflict toward additional
complexity.
The
most important book on this new, more complex American West is Elliott West’s
sparkling volume, The Contested Plains: Indians, Goldseekers, and the Rush
to Colorado (1998). In this smoothly written book, the Arkansas historian
and president of the Western History Association moves well beyond the
familiar conflict theme of the New Western history to a much more complex
West. To be sure, Professor West’s much-lauded study includes a good deal
of conflict. Native Americans contest with demanding environments, with
one another, and with inrushing Euro-Americans. American pioneers from
the East similarly clash with encroaching landscapes, Indians, and one
another. And those competitions continually change as they react to cultural
shifts and varying pressures.
But
Elliott West moves beyond these clashes to new complexities. For example,
unlike most of the New Western historians, West emphasizes how much plains
Indians were already in trouble with an environment that was rapidly being
depleted before whites invaded the area. He also demonstrates how often
the contesting races intermingled, especially when American men married
Native women. These mixed marriages and their offspring, and the resulting
sociocultural dilemmas they faced, symbolize the contesting and complicated
plains peoples—American Indians, Euro-Americans, and mixed-blood Americans
—at the center of Elliott West’s complex story.
Walter
Nugent’s thorough overview of immigration movements to and in the American
West in his remarkable book, Into the West: The Story of Its People (1999),
provides another noteworthy example of a more complex western historiography
surfacing in the past few years. Nugent does not overlook racism and animosities
that often divided westerners and frequently led to violent clashes. His
thorough demographic overview of western migrations vibrates with numerous
competing, interacting groups of people. But his valuable story is not
limited to conflicts. He also shows how the Pueblo Indians and the Spanish,
for example, merged to produce Hispanic peoples; how diverse other ethnic
and racial groups mingled and mixed to spawn new kinds of combinations
and communities. Throughout Into the West, Nugent furnishes numerous examples
of social and cultural contacts that often moved beyond initial, contesting
stages to ones of agreement and even consensus.
Other
western historians, as well as novelists and filmmakers, utilize this complex
story of the American West. For example, the much-praised books of David
J. Weber, the leading historian of the Spanish Borderlands, are replete
with cultural clashes and combinations. Similar complexities characterize
the valuable and smoothly written studies of James P. Ronda, noted authority
on Lewis and Clark and other agents of empire. The most prolific author
on western women, Glenda Riley, spins equally complex and rewarding stories
of women’s experiences on the frontier. Until his recent death Gerald D.
Nash pioneered a new field with a half-dozen multifaceted studies of the
20th-century West. Several western novels by Wallace Stegner, Larry McMurtry,
Cormac McCarthy, Leslie Silko, Barbara Kingsolver, and Ivan Doig, among
many others, feature complex western stories, as do such recent western
films as Geronimo (1994) and Lone Star (1996).
These
new complex stories of the American West, emerging since the mid-1990s,
now compete with the New Western historians’ conflict paradigm for dominance
in western historical writing. In the view of a growing number of western
specialists, the New Westerners failed to provide sufficiently complicated
views of the western past. Just as the earlier consensus interpretations
of the 1950s owed much to the Eisenhower years and the New Western history
to the post-1960s, so the social and cultural complexities of the late
1990s have encouraged a more complete view of the West. That an alternative
view of the American West has cycled into view in the past few years is
not surprising. Shifting historical interpretations of the West have always
been revealing evidence of socio-cultural transformations in the region.
Richard
W. Etulain, emeritus professor of history at the University of New Mexico,
was president of the Western History Association, 1998–1999. His most
recent book is César Chávez: A Brief Biography (Palgrave,
2002).
America
and Europe: Before and after September
11, 2001
by
John L. Harper
Stripped
to its bare essentials, the post-World War II transatlantic relationship
amounts to an American protectorate over Europe,
invited and to a degree shaped by the Europeans themselves. The protectorate
has served a double purpose: promoting peace and harmony among the European
states as well as counterbalancing Russian power. Commentators and policymakers
have “cried wolf” many times about these arrangements, especially since
the fall of the Berlin Wall. According to Professor Stephen Walt, writing
in 1999:
deep
structural forces . . . are already beginning to pull Europe and America
apart. Instead of becoming the core of an expanding security community,
united by liberal values, free markets, and strong international institutions,
the transatlantic partnership that fought and won the Cold War is already
showing unmistakable signs of strain. No matter how many new states join
NATO, and no matter how many solemn reaffirmations emerge from the endless
parade of NATO summits, the high-water mark of transatlantic security cooperation
is past.
Unfortunately
for Walt's analysis, several months after its appearance the U.S.
led NATO to victory in Kosovo. The Australian analyst Coral Bell
compared the war to a “bolt of lightning” revealing Europe's
basic weaknesses and America's
likely predominance for the next forty years.
Whether
or not one agrees with Bell
on Kosovo, it is striking that the U.S.
protectorate has not been called into serious question since 1989. This
has to do with several assumptions, widely (though not universally) held
in Washington and European capitals. First, the U.S.
acting through NATO continues to be Europe's
indispensable organizer and “pacifier.” Left to their own devices (as initially
in the Balkans), the European Union (EU) states are unlikely to form an
effective coalition, and national rivalries could reemerge. Despite lip
service to the notion of the “democratic peace” (liberal democracies don't
fight each other), Washington
believes that, in the final analysis, peace rests not on democracy but
on hierarchy. Second, a largely unspoken assumption, though it is clearly
evident in repeated U.S. warnings against an EU “caucus” or bloc within
NATO, is that the EU might actually coalesce to the point that it could
seriously challenge the U.S., for example, on policy toward the Middle
East and Russia. Gone are the days when a U.S.
president could wish (as Eisenhower did) that Europe
would become “a third great power bloc.” For the U.S.
today, NATO constitutes a ceiling beyond which purely European integration
cannot go. Third, most European states prefer that the U.S. remain the
leading power on the continent as an insurance policy against Russia, and
trust the U.S. as pacifier-protector more than the putative alternatives,
Germany or the EU.
Equally
striking is the convergence in the past several years of a set of controversies
with the potential to provoke serious transatlantic disagreement. The Bush
administration's refusal to support the Kyoto Protocol, the International
Criminal Court, and the Comprehensive Test Ban Treaty (all American initiatives)
has provoked European charges of unilateralism and hypocrisy. Such questions
will not bring a breakdown in relations, but they will fuel a crisis of legitimacy
for U.S. leadership in Europe and the development of a collective political
will on the part of the EU. Trade disputes, meanwhile, will persist (even
if history suggests they are manageable), and the common currency will
have strategic implications, tending to promote European cohesion and competition
with the U.S.
Both
sides have assumed that the European Security and Defense Policy (ESDP),
with its aim to create a 60,000-man rapid reaction force, will proceed.
The Europeans have avoided provoking the U.S. by not insisting on a formal
caucus in NATO, and Washington has not been so foolish as to try to cripple
ESDP. But the Americans are clearly irritated by European pretensions and
fear a weakening of NATO. Whether it works or not, ESDP could
spell trouble down the line. Washington has opposed “Project Galileo,”
a plan to free Europe from dependency on the Pentagon's Global Positioning
System (GPS) by building its own satellite navigation system. Europeans are reminded
of de Gaulle's observation that monopoly is always the best of all possible
worlds from the point of view of the monopolist.
Candidates
for the next round of NATO enlargement will be named at the end of 2002,
and the Bush administration has pressed for inclusion of the Baltic states.
The assumption has been that Russia can be persuaded to accept a status
similar to that of Turkey after 1918, and to prefer to see bordering countries
under the tutelage of an historic rival rather than itself. If, however
(the argument goes), Russia objects, it must have hostile intentions. This
makes it all the more urgent to prove that Russia cannot draw a “red line”
excluding certain states from NATO. The Europeans are more concerned about
avoiding Russia's isolation and humiliation. And how, they ask, can the
U.S. claim to act as a great power around the globe while denying Russia
special influence on its own doorstep?
The
Bush administration has stated that it will create a Missile Defense (MD)
to deal with a “limited missile attack.” But few believe that a limited
MD would remain limited, or that its real purpose would be to shield the
U.S. from unprovoked attacks. In Washington MD has been commonly spoken
of as a cover for the deployment of U.S. conventional and/or nuclear weapons
in a war with China.
In
Europe there has been far less concern about “rogue states” and more about
Russia's and China's reaction to MD. There is also considerable skepticism
about the effectiveness of America's proposed Maginot Line in the sky.
Europe has leverage: key radar installations must be built there, and an
MD that protected the U.S. but left U.S. forces and allied territory vulnerable
would make little sense. But it is hard to see the shape of a U.S.-European
compromise. The U.S. will not settle for the “boost-phase” option, deployed
near supposed rogue states, which would be most acceptable to the Europeans
and the Russians.
The
Balkans have been the scene of the bitterest recent disagreements, and
illustrate the ambivalent attitudes of the two sides. The U.S. professed
to favor European initiative in the 1990s yet belittled European efforts
and displayed a degree of Schadenfreude when they failed. The Europeans
have seen the Balkans as a proving ground of their capacity to act autonomously
and felt prodded after Kosovo to accelerate ESDP. Yet they have continued
to assume that U.S. participation is essential, and fiercely resent the
prospect (favored by some in the Bush administration) that the U.S. will
remove its forces, leaving European soldiers to do the dirty work.
Finally,
conflicts in other parts of the world pose risks to transatlantic relations,
as they did during the Cold War. In a U.S.-Chinese conflict over Taiwan,
for example, Washington would expect its European allies to take part in
(at least) an economic embargo of China. But it is unlikely that the Europeans
would see their vital interests at stake or be prepared to make serious
sacrifices on behalf of what many would see as a dubious war for Taiwan.
Such a war could lead to the withdrawal of U.S. forces from Europe out
of military necessity, political pique, or both.
The
ways in which the September 11 attacks will affect transatlantic relations
are by no means entirely clear, but the initial impact was largely positive.
In Europe September 11 produced genuine horror and feelings of solidarity,
even if Europeans were quicker than Americans to draw the obvious connection
between the attacks and America's hegemonic role in the Persian Gulf. The
patient, relatively successful conduct of the Afghanistan phase of the
war indicated that Washington had rejected the kind of unilateral crusade
that would not be supported in Europe and recognized that it needed the
help of its allies to break up the Al Qaeda network.
With
attention and resources focused on the anti-terrorism campaign, some of
the land mines in the path of U.S.-European relations may be avoided, at
least for the time being. A U.S. pullout from the Balkans today would not
provoke the dismay that it would have pre-September 11. Recent U.S.-Russian
cooperation has proved fruitful and Washington is now prepared to trade
Baltic entry into NATO for a kind of Russian quasi-membership in the alliance.
Such a solution will please the Europeans.
The
Bush administration has abrogated the Anti-Ballistic Missile Treaty, but
in the light of the nature of the September 11 attacks Missile Defense
may be in for reevaluation or delays. This would be all to the good as
far as transatlantic relations are concerned. (European critics have pointed
out that since September 11 demonstrated that there are far easier and
more anonymous ways of hitting U.S. soil than ballistic missile attack,
MD supporters should admit that there is now only one compelling rationale
for such a system: to allow the U.S. to deploy military power where and
when it wants without fear of retaliation. Europeans also tend to believe
that most countries stocking missiles and/or weapons of mass destruction
do so not because they contemplate a suicidal first strike on the West
but because they do not want to be bullied by local rivals or the United
States.)
China
and the issue of Taiwan seem destined for a period of relative U.S. neglect
in the wake of September 11, which may be just what the doctor ordered.
China needs a calm international atmosphere, including a lack of tension
with the U.S., to continue liberalization and carry out the delicate transition
to the “fourth generation” of Communist party leaders. Given the damage
a major blow-up in the Taiwan Strait could do, benign neglect of China
will have fringe benefits for transatlantic relations as well.
Bush's
January 2002 “axis of evil” speech was a rude awakening for those who believed
that September 11 had brought the two sides of the Atlantic psychologically
and geopolitically closer together. Responding to the heightened sense
of vulnerability in the U.S. (or, some Europeans suspected, trying to keep
public attention focused on outside threats for domestic political purposes),
Bush appeared to be inaugurating a new, unilateralist phase of the campaign.
Two
aspects of recent U.S. policy have particularly disturbed Europeans, and
indicate the gulf in perceptions. The first is the conflation (or confusion)
of the campaign against Al Qaeda with the long-standing goal of U.S. hardliners
to “take down” Saddam Hussein. Where is the evidence, Europeans ask, of
Saddam's involvement in September 11, that containment of Iraq as practiced
since 1991 is no longer working, or that Iraq (assuming it were able to)
would be mad enough to attack Western targets (unless the regime faced
annihilation)? What many Americans would apparently view as the removal
of a growing threat, Europeans would see as kicking a hornet's nest in
the Middle East. The second disturbing feature of recent U.S. policy is
its backing of Israeli policy in the occupied territories in the name of
an undifferentiated “war on terrorism.” Europeans are appalled by Palestinian
suicide bombings, but even more so by what many see as Ariel Sharon's brutal,
strategically blind attempt to crush the Palestinian Authority (a policy
in place before the current wave of suicide bombings) with tacit U.S. support.
Today
the Israel-Palestine conflict could provoke a major transatlantic row.
But it is not the only risk. If there are more costly attacks on the U.S.,
Washington may be tempted to lash out and/or adopt draconian internal measures.
Few Europeans will be on board. Under the best of circumstances holding
a coalition together in an open-ended, long-term campaign against terrorism
will be a far more demanding task than conducting the Gulf War of 1991.
The temptation for the U.S. to go it alone in the face of what it sees
as European cravenness and/or lack of capabilities may prove hard to resist.
On
balance, September 11 and the war on terrorism will probably accelerate
the trend already visible beforehand: the emergence of a more unilateralist
America and its political twin, a more ambitious and autonomous EU. Certainly
nothing has happened since September 11 to lead Europeans to question their
often-stated view that the world is too big and complex a place to be run
by the U.S. on its own. Professor Walt was right after all.
John
L. Harper is professor of American foreign policy at the Bologna Center
of the Johns Hopkins University School of Advanced International Studies.
His book, American Machiavelli: Alexander Hamilton and the Origins of U.S.
Foreign Policy, will be published next year.
Dispatch
from Canada
by
Ian Dowbiggin
When
it comes to the state of history in Canada,
there’s plenty of grim news to go around. Pessimists point to the fact
that few topics receive less attention in Canadian public schools than
history, buried as it often is in courses called Social Studies or Global
Education. In Quebec,
the province whose license plates read “je me souviens”
(I remember), teachers do little but harp on the theme of French-Canadian
victimization at the hands of English-Canadians.
It
gets worse. A recent survey found that only 51% of Canadians could correctly
name the country’s first Prime Minister (John A. Macdonald), and an appallingly
low 17% of Quebeckers (whose province is home to the vast majority of Canada’s
French-speaking residents) were able to name the nation’s first francophone prime
minister (Wilfrid Laurier).
To
compound matters, the federal government in Ottawa
seemingly can’t resist the Orwellian temptation to airbrush the past in
politically correct fashion. Lately, it paid tribute to twenty-three
Canadian soldiers executed as cowards and deserters in World War I. It
also appears poised to proclaim Louis Riel, who was hanged in 1885 for
leading a rebellion against the federal government, a “Father of Confederation.”
The
attempt to rehabilitate Riel, a self-styled religious prophet and ex-mental
patient of mixed French-Canadian and Indian blood, indicates that official
Canadian historical interpretations usually bend in the direction of identity-group
politics, and nowhere does this kind of politics thrive more than in Quebec.
Hardly a year goes by there without some heated controversy over the past,
and 2001 was no exception. Normand Lester, a television reporter
with the French-language side of the Canadian Broadcasting Corporation
(CBC), published Le livre noir du Canada anglais (The Black Book of English
Canada), modeling it after The Black Book of Communism, a text originally
published in France that catalogued the millions of deaths due to the spread
of communism in the 20th century. For his part, Lester listed the many
real and imagined abuses English-Canadians have inflicted on French-Canadians
throughout history. After he was suspended with pay by the CBC, Lester’s
supporters rallied to his side and helped to make his book a provincial
bestseller.
The
controversy over Lester’s book indicates that though other Canadians argue
among themselves, the country’s principal fault line still runs between
English-Canada and French-speaking Quebec. The Lester debate also demonstrates
that some Canadians chiefly regard the past as a fertile land hiding sinister
tales of abuse and victimization.
Indeed,
like all nations, Canada has its share of dark secrets. Take the story
of its flirtation with eugenics. In the early 20th century, many in Canada’s
elite classes were impressed by the spread of eugenics, a term coined in
1883 by Francis Galton, Charles Darwin’s cousin, and defined as the science
of human breeding. Like thirty American states, and Hitler’s Third Reich,
two Canadian provinces (Alberta and British Columbia) enacted eugenic laws
that permitted the forcible sterilization of individuals deemed unfit to
reproduce and raise families. When over 700 victims of Alberta’s law sued
the province for damages in the late 1990s (and settled in 1999), the country
learned for the first time about its eugenic past. A public outcry ensued
as Canadians asked: how could it have happened here? The answer more often
than not was that right-wing Albertans were responsible. But now thirteen
women who were victims of British Columbia’s sterilization law are suing that province,
raising further questions that Canadians may have more difficulty answering.
The
doleful and sordid story surrounding British Columbia’s experiment with
eugenics might seem just another in a series of events teaching Canadians
to be embarrassed about their history. But there are indications that a
deeper and more appreciative attitude toward Canada’s past may actually
be evolving. Perhaps the most positive sign is Canada: A People’s History,
a bilingual television documentary, now in its second season. Canada: A
People’s History chronicles Canada’s growth from the arrival of the aboriginal
Amerindians to the present day. In its coverage of the pre-film era, it
relies on remarkably effective recreations of events, such as the 1759
Battle of the Plains of Abraham, where France lost its North American colony
to the British.
To
Mark Starowicz, the documentary’s creator and executive producer, “the
myth that Canadians are not interested in their history died” the day the
series debuted. Noted Canadian historian Jack Granatstein, author of the
best-selling Who Killed Canadian History? (1998), admits that he, and
probably every historian, has complaints about the series; but he adds that
it has made “people, students particularly, much more interested in history.”
Statistics
back up Granatstein and Starowicz. The show’s first episode attracted 1.67
million viewers, unheard of for a documentary in this country, and its
first season resulted in surprisingly few spats between the French- and
English-speaking producers who collaborated on the project. There is even
talk that PBS in the United States is planning a similar epic series and,
with that end in mind, has sent a team to Canada to consult with Starowicz and
his associates.
Ratings-wise,
the documentary’s second season has been less successful than its first,
but only marginally. As the narrative approaches the present day, rumors
are rife of sparks flying between the francophone and anglophone producers.
But these raw nerves are offset by the fact that the cooperation between
the two groups was precedent-setting, given that the country’s two language
communities have such divergent visions of Canada’s history. In the final
analysis, the spirited debates triggered by Canada: A People’s History
may not lead to public consensus about what happened in the past and why.
But at least people are interested in history.
Ian
Dowbiggin is professor of history at the University of Prince Edward Island.
He is the author of A Merciful End: The Euthanasia Movement in Modern America
(Oxford University Press, forthcoming).
A
Darwinian View of Warfare
by
Doyne Dawson
There
are probably few who now remember that the United Nations saw fit to designate
1986 as the “International Year of Peace.” Perhaps the most lasting result
of this proclamation was to inspire a score of social scientists to assemble
for an international conference in Spain
where they drafted the “Seville Statement on Violence.”[1]
This document, modeled on the United Nations Educational, Scientific and
Cultural Organization’s (UNESCO) 1950 Statement on Race, was later adopted
by UNESCO. It has been endorsed by the American Anthropological Association,
the American Psychological Association, the American Sociological Association,
and literally dozens of other professional organizations of social scientists
around the world, and it has remained a sort of semi-official ideological
brief for the international peace movement to the present day. The United
Nations, indefatigable on behalf of causes that do not cost much, proceeded
to proclaim the year 2000 yet another International Year of Peace, and
the entire decade 2001-2010 the first “International Decade for the Culture
of Peace.”
As
the International Decade for the Culture of Peace would appear to have
got off to a rocky start, another look at the Seville Statement may be
timely. The stated purpose of this short text was to “challenge a number
of alleged biological findings that have been used . . . to justify violence
and war.” Among other heresies it was declared “SCIENTIFICALLY INCORRECT
to say that war or any other violent behavior is genetically programmed,”
or “that in the course of human evolution there has been a selection for
violent behavior,” or “that humans have a ‘violent brain.’”
The
Seville Statement was directed against the revival of Social Darwinism, then
generally known as “sociobiology,” whose impact on the social sciences
and historical studies was just beginning to be felt. The new Darwinism
appeared to challenge a set of assumptions about human nature, sometimes
labeled the “Standard Social Science Model,” which had long held credal
status in the minds of left-wing academics: that human nature is utterly
unlike the rest of the animal world, that human culture is almost totally
free from biological constraints, that both are indefinitely malleable
and therefore theoretically perfectible. Hence the Seville Statement took
it for granted that Darwinian studies of culture were intended “to justify
violence and war” and “condemn humanity to war,” though it failed to name
a single evolutionary biologist who had implied any such thing.
The
Seville Statement is pseudo-science. The Seville
conference did not give rise to any research program and was not intended
to; it produced instead a political campaign to collect endorsements from
influential organizations. The authors intended a preemptive strike at
biological discoveries that had not yet been made, closing off the evolutionary
study of war and violence and, by implication, evolutionary approaches
to the study of many other areas of human behavior.
During
the 1970s Jane Goodall, a British ethologist, conducted in East Africa
the first close field studies of wild chimpanzees and reported that these
apes were not exactly amiable and peaceable vegetarians, as everyone had thought.
Chimpanzees regularly hunt small game and eat a great deal of meat, which
is shared with other members of their band. They also practice a kind of
organized and lethal group conflict which, if not the same thing as human
warfare, looks very much like the evolutionary threshold of it. The most
arresting thing about these behaviors is that both hunting and “warfare”
are primarily male activities, as among humans. In other carnivorous species,
females hunt at least as actively as males, and among lions, more so; but
among chimpanzees the males are responsible for nearly all the kills of
larger game such as monkeys. Bands of males also routinely patrol the borders
of their territories and conduct stealthy raids into the territories of
neighboring bands, where they ambush and kill solitary individuals.
This
behavior, unique among primates, seems to be the result of a peculiar social
organization. Chimps are one of the very few primates in which females
leave the natal group at maturity to join other bands, while their brothers
remain in it. Most other primate societies are bands of related females
with their attached males, but a chimp society is a band of related males
with attached females. This apparently changes the dynamics of male aggression.
Coalitional violence becomes an evolutionary advantage.
There is one other primate that practices male-coalitional aggression: our own species. In all the known primitive cultures, hunting and warfare are male activities—indeed the essential male activities—and the badges of masculine identity. There
have been many unconvincing attempts by feminists to deny this fact or
minimize its significance, mostly based on unrepresentative evidence, or
a failure to distinguish what is peripheral from what is essential in sex
roles. In hunting-and-gathering cultures women may trap small animals,
an activity more like gathering than hunting. But no such culture has ever
been found where men do not, under all normal circumstances, monopolize
big-game hunting. Nor is there any band or village culture known where
men do not normally monopolize fighting in war. Also the great majority
of hunting-and-gathering cultures are patrilocal—men stay in their natal
group and women marry out of it, as with chimpanzees.
Parallels
between human and chimpanzee social behavior should not be surprising.
By the time Jane Goodall published her findings, molecular analysis had
established that Homo sapiens and Pan troglodytes are very closely related,
sharing 98% of their genetic makeup, and are descended from an as yet undiscovered
common ancestor that lived perhaps as recently as the dawn of the Pliocene,
five million years ago.[2]
Natural
selection works only in small, closely related groups, so the supposed
tendency of our primate ancestor to group solidarity and coalitional violence
cannot explain the evolution of human warfare or human society, which has
displayed a consistent tendency toward larger and larger groups of unrelated
people. What sociobiologists argue is essentially that the ancient primate
tendency toward proto-ethnocentricity and proto-xenophobia provided the
genetic seed from which eventually grew the tree of war.
This
implies that at some point a learned cultural pattern must have been grafted
onto this proto-ethnocentric stock. Culture built upon the genetic predisposition
to help and defend one’s own kind and broadened the concept of “one’s own
kind” to include all identified by the same ethnic markers, such as common
language and customs. In short, cultural evolution has mimicked genetic
evolution.
Conventional
social science has been resistant to any such idea. The Standard Social
Science Model divides the human mind into two categories labeled “biological”
and “cultural,” with no bridges between them. It is assumed that all “biological”
traits must be fixed and invariable, whereas all flexible traits (including
all complex mental activity and behavior) must be “cultural,” picked up
from the environment.
Evolutionary
psychologists, on the other hand, conceive of the mind as a network of
psychological mechanisms which evolved to handle the specific adaptational
problems that seemed most urgent during the Pliocene and Pleistocene epochs,
such as mating, parenting, learning language, and group defense; these
mechanisms remain latent until aroused by the appropriate environmental
cues.[3]
Among
the problems that forced hominids into new adaptations there could have
been none more pressing than group defense. The defensive advantages of
ethnocentricity are obvious and irresistible, and so are the advantages
of forming the largest group possible. By the time the fully human and
cultural form of ethnocentricity was established, the main purpose of group
defense had become defense against other bands of men, rather than animal
predators. At that point a balance of power situation arose, requiring
the continuing escalation of warfare and continuing increases in group
size to maintain a margin of safety. Darwin thought warfare had been not
only a product of natural selection but also one of its major instruments.
Ethnocentricity leads to a process of natural selection in which tribes
compete with one another like different species and the better organized
tribes win; and this process, he suggested, has been the main factor behind
the evolution of the human moral and intellectual faculties and the key
to human cultural evolution. Darwin’s concept of warfare as an agent of
evolution rests in part on outmoded biology and dubious ethnography. Darwin
and his contemporaries were ignorant of the existence of genes and thought
acquired traits could be biologically inherited, so they made no clear
distinction between biological evolution and cultural evolution; Darwin
assumed cultural selection was basically the same process as natural selection,
though dominated more by group than individual choices. Natural selection
of course requires some populations to become extinct while other, better
adapted populations flourish. Darwin saw no problem there, for like all
his contemporaries he assumed primitive tribes were constantly at war and
had a high extinction rate. Modern anthropologists are not so sure. There
have been sectors of the tribal world where warfare almost never seems
to cease; but even when it is constant, tribal warfare is slow. There are
no conquests; there are processes of attrition in which a tribe may be
gradually pushed off its land, eventually to be broken up and absorbed
by its neighbors. A recent statistical study of warfare in the interior
of New Guinea (an area where the traditional practices of war continued
into the late 20th century) concluded that about 10% of all the ethnic
groups in the region (“ethnic group” was defined as any group capable of
waging war on some other group) became extinct, as distinct cultural entities,
in every generation. This seems an alarming rate of extinction; and yet
the authors also concluded that if cultural change depends on extinctions
it would take centuries for any serious change to occur. Therefore the
sort of natural selection envisioned by Darwin seems very unlikely, except
perhaps for very long-range changes like the rise of agriculture in the
Neolithic. Warfare is too inefficient an agent of selection.[4]
But
is not another kind of selection going on? Neo-Darwinian theory suggests
it is better to distinguish cultural selection and natural selection as
fundamentally different, though parallel, processes. In human populations,
information is stored in minds as well as in genes and there is a kind
of cultural selection that is different from natural selection and yet
behaves much like it. I leave aside the question of what a “culture” is.
For present purposes it may be sufficient to point to the fairly obvious
fact that there are persistent differences among human groups which are
created by culturally transmitted ideas, not genetic or environmental differences.
These cultural variations—the vital packets of information that distinguish
a group from others—can be preserved long enough to become gene-like units
of selection, though we do not understand exactly how cultural information
is stored or how the information packets are transmitted, with modifications,
from one generation to the next. It is clear that the transmission is fast
and combinatorial, and this is what makes natural selection difficult:
there are no particular traits that stay fixed long enough to be selected
by nature in the Darwinian way.[5]
On the other hand, the very speed and instability of cultural transmission facilitate diffusion, and therefore make it possible for cultural evolution to operate much faster than biological evolution.
Let
us reconsider, and try to bring up to date, Darwin’s notion that warfare
is or has been an agent of cultural evolution. There appear to be at least
two ways this might work. First, warfare not only enforces social cohesion
within the group, but also accelerates the course of cultural evolution
toward greater complexity. Warfare forces alliances between bands and villages,
constantly expanding the social web and eventually forcing the formation
of chiefdoms and states. Second, warfare encourages technical innovation.
It is sometimes said that war cannot be a Darwinian process because the effects of competition are blunted by diffusion. But diffusion causes a successful adaptation to spread much more quickly than it could by the glacial process of group extinction. When the first tribe took up archery, its neighbors would have imitated it quickly. The diffusion of the bow and arrow during
the Mesolithic may in fact have been the first effect of intensive warfare
that shows up in the archaeological record.
But
hold on. I have distinguished two levels in the evolution of warfare, the
first genetic and the second cultural. Since the cultural explanation seems
quite adequate by itself to explain all the phenomena of warfare, why is
it necessary to assume a genetic base? As soon as we reach the stage of
language, culture, conscious ethnic rivalry, and the balance of power,
genetic considerations seem to become unnecessary as explanatory categories.
Is it not possible that warfare is entirely a product of culture, as the
Seville academics asserted, and as practically all social scientists at
that time would have agreed? But if we view the key factor as belligerent
ethnocentric feeling, rather than actual warfare, it becomes harder to
argue against an innate tendency. No society that was not ethnocentric
has ever been found, and it is difficult to imagine what such a culture
would be like. The so-called peaceful (better, minimally warlike) cultures
like the Bushmen and the Semai are fiercely ethnocentric, and there is
no reason to doubt they would return to warlike habits if that became feasible.
A culture’s degree of warlikeness depends simply on environmental conditions.
One can agree with the Seville Statement that we do not have a violent
brain. But we do have a clannish brain, and that can all too easily amount
to the same thing. It is much easier to explain the ubiquity of war if
we assume that a biological tendency to ethnocentricity was already there.
One
effect of a Darwinian view of warfare is that the question of the causes
of war, so much debated among anthropologists (and usually posed as a question
about the importance of economic motives), comes to seem relatively peripheral.
The important thing is that there is a low threshold for violence. The
specific trigger that activates it will vary: motives of revenge and honor
are probably dominant in the simplest societies, while material motives
doubtless become more important as the permanence and density of settlements
increase.
The
continuing evolutionary study of human psychology and the human genome
will probably tell us eventually whether the Darwinian perspective on warfare
is correct. If it is, what are the implications for ethical and political
thought? If it turns out that the tendency to warfare is innate, this certainly
would not “justify” warfare in general, still less justify any particular
war. And, so far as I can see, those who espouse pacifism as a personal
ethical position would be as free to do so as they are now. But I think
the triumph of Darwinism would mean that pacifism as a political program
is simply wrong. If we wish to control warfare, we need a better understanding
of what methods will work, and these methods must not be based on the false
assumptions that human nature is infinitely malleable and that all conflict
can be ended.
The
threat of war will never go away, and peace can be achieved only by strenuous
political effort. It will not be had by promoting “peace studies” or “peace
education,” which indeed are likely to do harm by spreading complacency.
The most far-reaching implication of Darwinian psychology is that it is
time to abandon the utopian visions that have provided the steam for much
of the radical Left for two hundred years—Marxist socialism, radical feminism,
pacifism. It tells us to suspect all solutions that depend upon the creation
of a “New Man” or a “New Woman.” There will remain plenty of room for every
type of practical ameliorative social reform, but limited room for societal
restructuring. We will probably become more skeptical about promises that
we are at the “end of history” on the grounds that liberal democracies
will not go to war with one another. As for historians, the success of
the neo-Darwinian revolution may have the effect of restoring political,
diplomatic, and military history to their former prestige, but it would
probably have no implications for the practice of history. Only the broadest
of macro-historians have ever bothered themselves with questions like these;
and military historians have generally taken it for granted that warfare
is, in some sense or another, in the nature of man.
Doyne
Dawson is professor of international relations at Sejong University. His
most recent book is The First Armies (Cassell, 2001).
[1]
The text of the Seville Statement on Violence is available on the Internet and has been reprinted many times, e.g. in Jo Groebel and Robert A. Hinde, eds., Aggression and War: Their Biological and Social Bases (Cambridge University Press, 1989), xiii-xvi.
[2]
W. G. Kinzey, ed., The Evolution of Human Behavior: Primate Models (State
University of New York Press, 1986) is a good starting point for anyone
wishing to know more about this subject. The case for the Darwinian view
of war is presented in detail by Richard W. Wrangham and Dale Peterson,
Demonic Males: Apes and the Origins of Human Violence (Houghton Mifflin,
1996).
[3] On
Darwinian psychology see J.H. Barkow, Leda Cosmides, and John Tooby, eds.,
The Adapted Mind: Evolutionary Psychology and the Generation of Culture
(Oxford University Press, 1992); and Henry Plotkin, Evolution in Mind:
An Introduction to Evolutionary Psychology (Penguin Press, 1997).
[4] Joseph
Soltis, Robert Boyd, and Peter J. Richerson, “Can Group-functional Behaviors
Evolve by Cultural Group Selection? An Empirical Test,” Current Anthropology
36 (1995): 473–494.
[5] This
is the main difficulty with thinking of culture as a bunch of “memes” or
particular cultural traits analogous to genes. There has been much discussion
over the concept of the “meme,” a term coined by Richard Dawkins in The
Selfish Gene (Oxford University Press, 1976); see Robert Aunger, ed., Darwinizing
Culture: The Status of Memetics as a Science (Oxford University Press,
2000).
Hostages
in the Classroom[*]
by
Elizabeth A. Dreyer
When a young man entered my classroom at Fairfield University at 4:00 p.m. on Tuesday, February 12, 2002, and announced that he had a bomb and that we were being held hostage, we took him very seriously. No one in the class had ever met him. Only later did we learn that his name was Patrick Arbelo, and that he was a 2001 graduate of Fairfield University. His choice of our classroom was random. He did not know any of us. A short while into the ordeal I had an intuition that this young man was not dangerous and did not intend to harm anyone. But we had to play it “by the book.” The stakes were simply too high.
Mr. Arbelo’s
basic demand was to have a brief statement and a list of five books read
over the radio. The statement, unclear and disorganized,
contained elements of anti-Semitism.
Within a few minutes of his entering the classroom, Mr. Arbelo
told three students to leave, then two more. I requested that several other
students who had health issues or who were very upset be released. After
several hours, I was able to negotiate their release.
The time moved both slowly and quickly. After about five hours, I began to wonder if we would be there all night. On the other hand, I was very busy mentally, which made the time pass quickly—staying alert to every detail; periodically making sure that each student was okay; constantly encouraging and supporting students in an attempt to alleviate their fears; and following the conversation between Mr. Arbelo and the police hostage negotiator over a two-way radio. Coincidentally, right before Mr. Arbelo
entered the room, I had been describing the two perspectives within which
ancient Greek culture viewed time—as chronos,
or ordinary time (I remember giving as an example Tuesday, February 12
at 3:45
p.m.) and kairos, or special time, time that is
pregnant with meaning and possibility. I wrote these two Greek words on
the board in large letters and we stared at them for almost seven hours!
Afterward, I asked the students if they would ever forget
the meaning of the Greek term, kairos. I received a loud “NO!” from
everyone.
There
was occasional levity. One student wanted to know if this experience meant
that everyone would get A’s for this course. Another wrote a note that
she had recently commented to a friend how uneventful her life was! At
one point, I told the students that they could consider Lent done and proceed
directly to Easter as soon as we got out.
One
aspect of the study of history is that we “meet” figures from many different
settings and time periods who have overcome various challenges. The subject
matter of this class, “Voices of Medieval Women: Silent No More,” provided
a context with several links to our crisis experience. The centerpiece
of the course is a critical reading and analysis of primary sources in
translation written by medieval women mystics. We examine the historical,
ecclesial, social, economic, and political contexts in which each text
was produced. But we begin the course with a contemporary piece. The previous
week we had had a spirited and engaging discussion of Mark Salzman’s novel
Lying Awake (2000) about a group of contemplative Carmelite nuns living
outside Los Angeles. The protagonist, Sister John of the Cross, is faced
with some very difficult challenges in the course of the novel. Little
did we know that the very next week, we would be faced with some of our
own. The assignment for the following week was the Passio Perpetua, a prison
narrative of a Christian catechumen in Carthage, North Africa, around the
year 200. While our setting was totally different from Perpetua’s, the
experience of being imprisoned was shared.
Having
taught graduate students for many years and coming only recently to undergraduate
teaching, I am constantly struggling to find meaningful ways to link the
past with the present that these young people experience—to find “hooks”
in students’ knowledge and experience that will help them see history as
a complex, lively, and interesting story that ends up in the present with
their own stories. To this end, I use experiential learning and various media, and I try to engage students in the search for common ground in the human
condition. In the hostage crisis, I think students discovered that some
of the ideas we were studying could be used to help cope with the situation.
My
work in historical theology also served me in many ways throughout this
crisis. The Christian mystical tradition is a rich strain in Western culture
that is filled with wisdom about how to live. Writing about and teaching
these texts has been an important part of my identity as a person, as a
woman, and as a professional. I find in them a certain clarity—like good
poetry. The mystics invite me to keep focused on what is truly important
and remind me not to “sweat the small stuff.” This perspective served me
well during the hostage crisis in which some of the students actually prepared
to die.
Medieval
women mystics wrote about God and love and the human struggle in vital
and creative ways. These women were grounded in a way that allowed them
to trust and live in peace no matter what they were up against. These visionaries
make a claim on my life in ways that proved relevant to the crisis in which
we found ourselves. They teach that in the end, God can be trusted to hold
us up in love and care and that we can do the same for each other.
We moved the class to another building, but it was impossible not to feel a little on edge every time we came together. Since six students had been ill and absent on the day of the incident, we first had to hear from them and then allow those who had been in class that day to respond. Both groups were honest, gracious, and understanding. I met with each student individually to follow how things developed for them during the rest of the semester. I also altered the syllabus and writing requirements since many (including myself) found it difficult to concentrate in the days
that followed. Students fell behind in all their classes, and even though
teachers were alerted, some students still felt anxious about the potential
for slipping grade point averages. Many are preparing for jobs or graduate
school next year. I added a showing of Carl Dreyer’s The Passion of Joan
of Arc (with foot-long submarine sandwiches and soft drinks) and scheduled
a class in our chapel during which students experienced various forms of
meditation popular in the medieval period. Together we planned a grand
party after our last class, ending the semester with good as well as traumatic
memories.
I learned
a lot—too much to detail here. I learned that we live life together. We
depended on campus security, local police, FBI, administrators and staff,
counselors, and parents to get us through. Their care, expertise, and professionalism
were amazing and welcome. I learned how helpful it was that the many communities
to which I belong chose to show their support through phone calls, e-mails,
a question about how we were, a hug in the hallway, flowers, and notes.
I guess I am not a Catholic for nothing—my psyche is oriented to the visible,
tangible, sacramental symbols of care and concern and it helped me move
through this with a modicum of grace.
I learned
that university students in my class are strong, savvy, resilient people.
Each one did something important to move the situation forward and no one
did anything that jeopardized the safety of others in any way. They are
an impressive lot, and I feel more confident about the future as a result
of this experience. I have also “met” these students in new and welcome
ways. This is a gift indeed, and I feel proud to know and be associated
with these young people. We have a kind of bond that would never have existed
without the shared trauma of being held hostage together—one of the many
ironies of life.
Beyond
being alert and using common sense about strangers or troubled students
on campus, we can’t “prevent” such an event from happening. By their nature,
universities are meant to be places of openness and dialogue—places that
offer welcome and hospitality to both the stranger and the fresh idea.
In
retrospect, the thing that proved most important was calm. Students told
me that it helped that they did not see fear or panic in my face. They
needed someone to be in charge. One student said, “You never became one
of us.” I did not realize until afterward how much the students were relying
on me to get through this ordeal. I think that each professor caught in
such a situation must rely on her own particular gifts, resources, and
personality. I am a tactile person, so I held hands, touched shoulders,
rubbed backs, and encouraged students to hold on to each other, especially
those who were most afraid.
Since
September 11, I have often been reminded of the mix of terror and beauty
that characterizes human experience—often in dramatically uneven ways.
Those across the globe who experience the most extreme forms of terror
are often, strangely, the ones who teach this truth best. In the midst
of crisis, it is good to remember the simple, tender, touching moments
life puts before us. Life is never just one thing and we shouldn’t demand
that of it in order to embrace it.
Elizabeth
A. Dreyer is professor of religious studies at Fairfield University. She
is the editor of The Cross in Christian Tradition: Paul to Bonaventure
(Paulist Press, 2000).
[*] On the afternoon of February 12, 2002, Patrick Arbelo entered a Fairfield University
classroom and claimed to be holding a bomb. Professor Elizabeth Dreyer
and twenty-six students were in the classroom. Seven hours later, a hostage
negotiator from the Fairfield
police department convinced Mr. Arbelo to
leave the classroom, and he was taken into custody. No one was hurt.
LETTERS
To
the Editors,
Roger
Emerson is a valued friend and colleague, who knows
far more about Enlightenment in Scotland
than I ever shall. The issue between us is whether I can write of this
subject—especially when using the phrase “The Scottish Enlightenment”—without
binding myself to depict it in all the fullness of which he is capable,
so that I may be at fault if I do not attempt this. I shall try to show
that this question is based on a methodological confusion: a
reification, based on a widespread misuse of language, which has
so far bedeviled his thought that he several times accuses me of saying
things which I specifically said I was not saying. How has this confusion
come about?
On
the cover of the February 2002 Historically Speaking
in which Emerson’s communication was published, there appeared the words,
“What Was The Enlightenment?” They were not Emerson’s; he had chosen his
own title; but they offer a good starting-point for discussion of this
question. It is my position that we do better to avoid the term “The Enlightenment;”
not because there was no such thing as Enlightenment but because there
were too many things going on to which it is helpful to apply the term,
and consequently too many ways in which it is useful to employ the word,
to leave it desirable to lump them all together and treat them as a single
thing, a process or phenomenon; a step which calling them “The Enlightenment”
encourages and even obliges us to take. The definite article—the word “The”
itself—is, I am persuaded, one of the most dangerous words in the historian’s
vocabulary. It converts adjectives into nouns; it makes us believe in unified
entities, of which unitary descriptions must be found and unified causes
supplied. “The Enlightenment,” like many other such constructs, becomes
a single process—no matter how complex—of which one description must be
more correct than another, and to which particular phenomena do or do not
belong, according to the description adopted. I wrote The Enlightenments
of Edward Gibbon because a great historian, Franco Venturi,
adopted a description of “Enlightenment” which he could not find in England,
and was obliged to choose between excluding Gibbon from Enlightenment and
excluding him from England.
The
problem is that of the article, and the noun which it creates. The adjective
“Enlightened” is not impoverished, but over-rich; there are a great many
ways in which it makes sense to use it, and to offer explanations of how
one is using it and why. This does not make it a fiction or fancy of the
historian’s invention. The actors in the 18th century of whom we use it
did not, indeed, speak of “The Enlightenment,” or very much of “Enlightenment”
in the abstract sense. But they frequently employed the metaphor of light—very
often lumière—and used it
in denoting complex historical changes in which they saw themselves involved.
By the end of the century, as we know, Immanuel Kant could be asked, and
attempt to answer, the question, “Was ist Aufklärung?”—but
he did not say “die Aufklärung,”
and histories may be written of how such reifications as “die Aufklärung,”
“the Enlightenment,” and “les lumières”
(the plural makes this an exceptional case) came into being. In general,
one may suspect, they were created by historians, who saddled themselves
and us with reified abstractions of which final definitions and/or comprehensive
descriptions are supposed possible and made obligatory. We thus make demands
on ourselves which it is sometimes neither helpful nor possible to meet.
To
liberate ourselves, and escape Venturi’s
dilemma, I proposed and practiced a discourse of “Enlightenments” in the
plural, and followed Gibbon through a number of cultural encounters to
which the term might usefully be applied. To have gone further, and proposed
abandoning the noun “Enlightenment” and using only the adjective “Enlightened,”
would have been possible but would have involved the unmaking of too much
that is established in historical discourse, including my own. But the
possibility is worth considering because it enables us to see that there
were a great many things going on which are called and worth calling “enlightened,”
and a number of processes taking place which it is useful to call “Enlightenments.”
These varied from case to case, and occurred in diverse contexts (some
of them national, which in the eyes of some critics—not Professor Emerson—is
a very horrid thing). At this point there arises the question of how one
is using the vocabulary of Enlightenment and why one is using it at all;
and I meet the objection that unless I use it with reference to a single
set of meanings, I am using it irresponsibly, destructively, or both. To
this I reply that I use it to denote a number of things going on in Western
European culture in a century beginning about 1680; things highly diverse,
often contradictory, but associated with each other, for reasons to which
the terms “enlightened” and “Enlightenment” may intelligibly be applied.
As we use this vocabulary, however, its implications will shift, which
is not to say that they will disappear. We must stand ready to give an
account of what we mean by calling these things “enlightened” and “Enlightenments,”
of whether others saw them as we do, of how they may be connected with
each other and of why they seem to have occurred when and as they did.
Since our vocabulary is in part instrumental, it will be rich. But we are
not obliged to reduce “Enlightenment” to a single set of concepts, or to
depict “The Enlightenment” as a unified historical process. To any of the
questions we pose to ourselves, there may be more than one set of answers;
history, after all, is like that.
The
object of the exercise, then, is to find ways in which the word “enlightened”
may be used, and may have been in use, to describe things happening in
1680-1790; to find processes going on to which we may apply the term “Enlightenment”
if—as seems probable—it was not then in use. None of this implies a search
for The Enlightenment, and if we elect to use the word we must be accountable
for our use of it. Some of the processes we describe will be hypothetical,
in the sense that we say it makes sense to imagine and describe things
in this way and must justify ourselves by showing that it continues to
do so. We are not required to populate the historical landscape with a
ballet of semi-visible dinosaurs.
The
power of reification, and of the definite article to induce and impose
reification, is, however, very considerable among historians, as among
other language-using humans. There are moments when Professor Emerson seems
to me so far its victim that he takes my explicit statements that I am
not reifying “Enlightenments” as evidence that I am reifying them, but
in the wrong way. Thus he says that I offer a “location of a specific time
and place at which the Enlightenment may be said to begin in Europe—at Utrecht
in 1713.” Now even with the most charitable interpretation of the words
“may be said,” this is precisely what I am not doing. Since I do not hold
that there existed any such thing as “the Enlightenment,” I cannot offer
any “specific time and place” at which it could begin. What I do offer
is a contemporary concept of civility in Europe which associates it with
a plurality of states exchanging goods, manners, and ideas with one another;
I show this concept set out in the writings of Hume, Robertson, Smith,
and Gibbon, and I show how it was associated with the defeat of the “universal
monarchy” attributed to Louis XIV. I suggest that it is instrumentally
useful to imagine a “Utrecht Enlightenment”—“a” does not mean “the”—between
1714 and 1789, existing in the minds of some Enlightened writers and worth
employing in our own thinking. At the same time I point out that Voltaire
attributed something very like Enlightenment to that same universal monarchy
and its effects upon Europe, and that Hume
wrote an essay navigating between the two theses. I am more interested
in the existence of rival concepts of what we would term “Enlightenment,”
then and now, than in the absurd pursuit of the “time and place” at which
“it”—The Enlightenment—“may be said to begin.” The question is whether
Roger Emerson really believes I am engaged in the pursuit, and if so, why.
In
seeking to pluralize “Enlightenment,” and use the word to denote a diversity
of related phenomena, I employed the tactic of writing of a number of “Enlightenments”
supposed to exist in a number of contexts. Among these were “the Arminian
Enlightenment,” “the Parisian Enlightenment” (the one known to us all),
“the Utrecht Enlightenment” (far more a concept than a phenomenon), and
others mentioned in The Enlightenments of Edward Gibbon and its
successor. “The Scottish Enlightenment” enters my narrative later, for
the reason that Gibbon encountered anything that might be known by that
name only as the shaping of his Decline and Fall was affected by
the historical writings of Hume, Robertson, Smith, and Ferguson.
He wrote: “On this interesting subject, the progress of society in Europe,
a strong ray of philosophic light has broke from Scotland in our own time”
(Decline and Fall, ch. 61, n. 69).
It could be said here that he was writing about Enlightenment in Scotland, but to say that he was writing about “the Scottish Enlightenment” would
be to put a further strain on language. He does not use the term, and there
is no sign that he thought of “it” as a concrete phenomenon or process.
Had he done so, Professor Emerson would be entitled to ask whether his
description of “it” was correct or adequate. I used the term, and Emerson
argues at length that I give a description of “it” which is neither.
Perhaps
it would have been better, or at least safer, to avoid prefixing the definite
article to the various “Enlightenments” I posited, since the fatal word
“the” was liable to induce in every case the sort of reification I am trying
to avoid. I hoped, I suppose, to use it to indicate that the word “Enlightenment”
could denote different though not incongruous phenomena in different contexts,
and that the insertion of an adjective between article and noun might indicate
a need for specificity. If this is not enough, we should avoid the noun
“Enlightenment” altogether, since its presence makes the use of an article
virtually unavoidable. When I employed the term “the Scottish Enlightenment”—Gibbon,
by the way, never visited or lived in Edinburgh or Glasgow as he did Lausanne,
London, and Paris—I did so adjectively and adverbially, to denote something
that needs to be understood as context to the great historical writings
important to Gibbon. Roger Emerson complains, bitterly and at length, of
my writing in this way. He sees “the Scottish Enlightenment” as a complex
concrete entity, of which a complex but unified description both can and
should be offered; and he is so deeply convinced that I ought to have offered
such a description that he treats me as if I were doing so. Each of my
refusals to deal with “the Scottish Enlightenment” in the way he demands
is treated as if it offered a description of the whole, which inevitably
turns out to be inadequate. The extraordinary result is that he ends by
accusing me of doing “what [I have] accused others of doing: ‘bringing
[the various Enlightenments] within a single formula—which excludes those
it cannot be made to fit.’”
Once
again, I was not doing what I said I was not doing. I am studying a history
of historiography, to which certain aspects of what may be and has been
termed “enlightenment” in Scotland
and elsewhere were and are highly relevant. I attended to those aspects
and what can be called Enlightened about them, and when I used the term
“[the] Scottish Enlightenment” it was as shorthand to this purpose. I do
not (let me add) say that no general description of “the Scottish Enlightenment”
is possible; I think it would become a number of descriptions of a number
of phenomena, but there would be ways of associating them with one another.
I merely claim the right to deal with those forms of Enlightenment in Scotland
that are to my purpose, and I am under no obligation to deal with those
that are of another character. Emerson, convinced that I should be
describing “the Scottish Enlightenment” as a whole, comes to insist that
I am doing what I should be doing, and am therefore doing it wrongly.
Even when, in his concluding sentences, he appears to recognize what I
am in fact doing, he calls for “a fairer account of the Enlightenments
on which [I draw],” and remains the prisoner of his assumptions that all
such Enlightenments are unitary entities and to speak of them at all is
to incur the obligation to describe each entity in its entirety. I have
warned the reader not to expect this.
J.
G. A. Pocock
Johns Hopkins University
Roger
Emerson replies:
We
are all indebted to John Pocock for many
things—for perceptive studies of political concepts, for showing us the
importance of the “Machiavellian moment,” and now for interesting analyses
of the historiography of Gibbon and those on whom he drew. What I do not
think Professor Pocock has given us is a
clearer understanding of the varied processes which many, but not he, would
call “The Enlightenment.” He thinks the word “the” tends to reify concepts
and to give an illusory ontological status to what really were processes
that differed from place to place, time to time, and from thinker to thinker.
At the same time, he is unafraid to describe the processes as a complex
and rich set of developments occurring in Europe
between ca. 1680 and 1790, which, in various contexts, resemble
one another like members of a family. Well, if the family is a proper analogue,
then there is a common gene pool and basic ways in which they are all similar
enough to be described as related. Describing the processes of enlightenment with such an analogy leads to talk about “a” or “one” or “the” family, which is just as reifying as—though no more so than—dealing with The Enlightenment,
said to vary over time, from place to place, and from thinker to thinker.
I do not think he is a better nominalist
than I.
I
clearly understand that Pocock is dealing
with historiographical traditions and with
what Gibbon appropriated from others. This is not an issue which divides
us. What does divide us is the fact that Professor Pocock's
Enlightenments are characterized too narrowly and without regard to the
systematic character of the thought structures of the age. As I wrote in
the February 2002 Historically Speaking:
Pocock
is interested in the traditions and men from whom Gibbon drew, but
his work often reads as if he were giving a character to the Enlightenment
or Enlightenments in which he situates Gibbon. His exciting, valuable,
and erudite volumes will be taken by many to be authoritative statements
about the Enlightenments with which he deals as well as about the Enlightenment
of Edward Gibbon. As a guide to Gibbon, we may perhaps trust him;
I think we should not when we think about the Enlightenment and particularly
the Scottish one.
He
has shaped the issue dividing us as being whether or not he can deal with
“Enlightenments” without treating them in their fullness but only as Gibbon
saw them and used their contents. Of course he can, but he cannot write
as if what Gibbon saw—and what Professor Pocock
wants to discuss—was all there was to see and talk about. Gibbon's Scottish
Enlightenment, and Pocock's own, do scant
justice to the culture of the enlightened in Scotland
or in other of their principal settings. It is like looking only at the
fact that the family were all blondes but
ignoring that they were mostly all tall, freckled, and gap-toothed. Pocock
chooses to interest himself in the creation of “concepts of civility” associated
with a plurality of states “exchanging goods” within a balanced system
in which universal monarchy had become impossible. This he sometimes characterizes
as “the post-Utrecht Enlightenment” in which “politeness and commerce
flourish together, and do so better in a commonwealth of independent states
than in a universal monarchy ancient or modern ... ” (II:189). He has interesting
things to say about this and about how and where Gibbon figures in the
discussion of such issues—ones which Pocock,
like Franco Venturi before him, has long
deemed central to the intellectual and political development of Europe
in those years. But there is no mistaking the limited character of the
Enlightenments with which he is concerned and which he studies or assumes.
These Enlightenments are, from the beginning (I: 6-10), moral, political,
and religious in character.
We
hear little if anything of enlightenment in epistemology, science, or even
of such socially important things as patronage and improvements. Those
are not just Gibbon's exclusions but the exclusions of John Pocock.
They work to reduce his various Enlightenments to a single formula which
omits what cannot be made to fit. Eliminated is what does not interest
the 18th-century historian or the 21st-century professor of the history
of political thought. It will not do to say that Professor Pocock
is not alluding to a whole when he writes passages such as:
The
strategy of Enlightenment in Scotland
was the development of a science of morality, which on the assumption that
humans were intrinsically social beings became a science of society in
all its ramifications. It ... took the form of jurisprudence, which was
then organized into history and next—the most importantly but less immediately
so for the purposes of the present volume—into political economy
.... The Enlightened, however, seized the high ground; they exhibited
morality, sociability and as we shall see history, as systems intelligible
in themselves, and without bothering to deny the efficacy of grace, left
it to attach itself as best it could to phenomena that could be understood
without it (II:313).
When
Professor Pocock cites modern historians
of Scotland,
he tends to cite those whose work is similarly narrow in conceiving enlightenment
and the Scottish Enlightenment—partly because it is often somewhat derivative
from his own. He and they have unduly narrowed the Scottish Enlightenment
by excluding from the processes they talk about things which were central
to it and central to Enlightenment and Enlightenments throughout much of Europe.
All
this does bear on historiography both in Scotland
and elsewhere. The conjectural histories sometimes occurred within a natural
history of man—those were Baconian collections
of facts about men on which inductions rested. The context for such a view
was a scientific one. The anthropology used to describe human nature was
not without similar relations to the science of the period, as many discussions
of race and development show. The methods of the moral sciences—as they
were conceived by George Turnbull, David Hume, Adam Smith, Adam Ferguson,
and others with whom Pocock has dealt—were
often explicitly related to the methods or ways of thinking of the scientists.
And the fact that the men with whom he deals were often put in place by
others with particular patronage aims is not irrelevant to the careers
of Smith, William Robertson, or Ferguson.
Nor is it irrelevant to the fact that they wrote the kinds of histories
they did. Gibbon was not, like Robertson, urged by Lord Bute
to write histories of a certain sort; nor, like Ferguson,
did he write one which was to be applauded by his friends who favored a
Scottish militia. John Pocock's Enlightenments, when they omit such things, are too narrow. When he as an historian ignores
them, he is not helping us better to understand Gibbon or the traditions
from which he drew and among which he worked.
Roger
L. Emerson
University
of Western Ontario