Recently in Philosophy Category

Alasdair MacIntyre's concept of "practice" is a comprehensive one, transcending any particular culture. It is precisely his distinction between the "codes of practices" (193) and the "core virtues" expressed within those codes that allows virtue to be independent of particular actions. While MacIntyre uses the morality of lying in different cultures as an example of how "codes of practices" can diverge from virtue (192-193), the issue of slavery (whether the American form or any of the others – African, Asian, Native American, ancient or early Modern) is an interesting one. Is there such a thing as a virtuous slave owner? (Aristotle should hope so, since he owned slaves.) Is there such a thing as excellence in the management of slaves? While we might acknowledge excellence in the "practice" of the management of employees, is there any "internal good" to be gained from excellence in the management of slave labor? Or is it just inherently wrong? MacIntyre addresses the question on page 199, asking if some "practices" are inherently evil. He allows for the possibility, while confessing to be unable to find any examples (200). For him, the problem with the likeliest candidates for an evil practice, torture and sadomasochistic sex, is that he finds them closer to a "techne" than a "practice". But plantation management in the antebellum South is complex enough to be a "practice", and the social element is fulfilled by the community of wealthy slave owners. In that historical context a slave owner could strive for excellence in the management of his plantation, judging himself according to objective standards shared by other plantation owners, exercise "core virtues" in pursuit of an "internal good" (the wealth and status accrued from his exploitation of slave labor being secondary), and be simultaneously virtuous and evil. In fact, the description might apply to Thomas Jefferson, except that by all accounts he was a very poor manager of his estate, dying in near bankruptcy.

Reference:

Alasdair MacIntyre. After Virtue. Notre Dame, IN: University of Notre Dame Press, 1981.

Epistemologically, if you spend all of your time analyzing a discourse without reference to the original subject of that discourse, you run the risk of remaining so highly abstracted from the subject of study that you get not closer to the truth, not closer to reality, but further away from it. This of course presupposes that you believe in such a thing as "reality". An alternative point of view denies that there is such a thing as "reality", or at least denies that it is possible to know such a thing. In that world, all there is are discourses, and any one discourse is just as serviceable as any other. But reality has a curious way of continually reasserting itself.

Said's claim, essentially that the Orient as anybody in the West knows it does not exist, is an interesting one epistemologically. "I have begun with the assumption that the Orient is not an inert fact of nature." (71) The way this is phrased is certainly quite defensible. He elaborates: "There were – and are – cultures and nations whose location is in the East, and their lives, histories, and customs have a brute reality obviously greater than anything that could be said about them in the West." (71) So what then is the book about? "The phenomenon of Orientalism as I study it here deals principally, not with a correspondence between Orientalism and Orient, but with the internal consistency of Orientalism and its ideas about the Orient … despite or beyond any correspondence or lack thereof with a 'real' Orient" (71). Said starts with the assumption that there is no such thing as "the Orient", though there are people and cultures in the territory traditionally so named, and then undertakes a study of how Europeans have constructed "the Orient". So far so good. But there is a second part to Said's thesis, a second assumption that he combines with the first, namely that the entire construct of Orientalism is "more … a sign of European-Atlantic power over the Orient than it is a veridic discourse about the Orient." (72) Each of these assumptions needs to be examined closely. The first idea, that "the Orient" is a construct, is fairly incontrovertible. And it would be very interesting indeed to study how the construct of "the Orient" does or does not correspond with the peoples and cultures living in the territory designated as such. But that is not what Said undertakes. Instead he attempts to prove his second assumption: that Orientalism ipso facto does not represent the people of the Orient, but is rather a reflection of – and even a tool of – domination over the region. There is always a danger with this type of writing, where the author attempts to fit the evidence to the thesis. The danger is that in trying to make the point well he introduces his own set of distortions into the subject of inquiry. Said is explicitly trying to prove that all previous Orientalists were tools of imperialism. This may be the case, but he may be mishandling them at least as egregiously as he accuses them of mishandling the Orient. Whether this is actually so is beyond my competence to judge, but the danger ought at least to be acknowledged. Said's point would be made best if he acknowledged this inherent danger and took steps to mitigate it. However, he appears not to have done this, at least if the basic claims of his critics have any credibility.

Reference:

Edward Said. Orientalism. New York: Vintage, 1978.

I find Said's application of Foucault's concept of Discourse to be closely related to Thomas Kuhn's concept of Paradigms. Both refer to unconscious mental structures that both assist and limit human thinking. They assist in that they create mental shortcuts, categories for rapidly understanding the whirling chaos of perceptions and impressions that constitute reality's direct approach to our senses. The limitation comes from the fact that whatever doesn't fit the pre-existing structure is not even perceived, or if it happens to be noticed is explained away, so that whenever reality doesn't fit the theory, it is reality that is adjusted. I find it interesting that both concepts arose within a decade of each other, both representing a dawning awareness not only of what we think, but of how we think. And of course both are linked to the study of history, because it is only in comparison with how people thought in the past, and with an understanding of how paradigms/discourses shift over time, that they are even noticed. In both instances, when you are thinking within a paradigm/discourse you are not aware of the fact. And, paradoxically, as soon as you transcend your current paradigm, you are simply entering another one (though the new one, being fresh, may not be so clearly defined). As such, Said's Orientalism is itself a Discourse, a way of looking at things that was revolutionary at the time of its publication and then became popular. But that's not to say that it has an exclusive claim to accurately represent reality. More classical Orientalists have a different point of view. Robert Irwin, in the introduction to his book Dangerous Knowledge (2006), observes: "Most of the subsequent debate has taken place within the parameters set out by Edward Said. Much that is certainly central to the history of Orientalism has been quietly excluded by him, while all sorts of extraneous material has been called upon to support an indictment of the integrity and worth of certain scholars. One finds oneself having to discuss not what actually happened in the past, but what Said and his partisans think ought to have happened." (4) You have, in essence, discourse versus discourse, paradigm versus paradigm.

Said makes a two-part claim about Orientalism. The first is that Orientalism constitutes a discourse, in the sense that Foucault uses the term. The second is that this discourse, this network of interlinked ideas, is inextricably linked to the history of political domination over the region known as the Orient. The first claim I can accept as fairly obvious, and to the second I would gladly grant a degree of influence, but the "inextricable" part is the one I wonder about. Can it really be true that no observations in 150 years of study of the Middle East are accurate enough to stand independent of the fact that the observer belonged to a dominant political structure? While the introduction to the 2003 edition of Said's book happily recounts the reactionary criticism the first edition received, criticism so ill considered that it's hardly worth taking seriously, the book is now 25 years old, popular, and well established. In that time various points have been disputed, some more effectively than others. I quoted Robert Irwin above, who wrote an entire book on the subject. It seems to me the most cogent arguments against Said's thesis are those that go after the second part, the claim that the history of imperialism is inextricably linked to all thoughts about "Orientals". Influence is surely present, but thoughts about, say, the development of the Arabic language could conceivably come from a study of the language itself, independent of whether your people occupy an Arabic-speaking territory or your territory is occupied by Arabic-speaking peoples.

References:

Edward Said. Orientalism. New York: Vintage, 1978.

Robert Irwin. Dangerous Knowledge: Orientalism and Its Discontents. New York: Overlook, 2006.

Thoughts on Nancy Fraser's essay "From Redistribution to Recognition":

I
Why is it that for academics, the solution to any problem is “a new critical theory” (69)?

II
It is an interesting starting point that requires the assumption that "both redistribution and recognition" (69) are necessary to a theory of justice. This may well be the case, but I would like to see on what grounds she establishes it.

III
Fraser's view of the economic consequences of gender division within society seems a little abstracted from the real world. Certainly she identifies real problems. But she moves all too quickly to the conclusion that "gender justice requires transforming the political economy so as to eliminate its gender structuring" (78). This seems like a worthy abstract ideal, but I can predict a number of practical problems with any attempt to implement it. Viewing social structure as the only reason why women are drawn to certain occupations (think kindergarten teacher) presupposes no biological basis for these affinities. Now don't get me wrong: I am not arguing that women are only fit to be kindergarten teachers, nor am I arguing that men can't make perfectly good kindergarten teachers as well (in fact I know a few). What I am arguing is that the affinity for teaching kindergarten may be statistically more widespread among women than among men for reasons other than social construction alone. And if this is the case, then social justice is not achieved by ignoring that fact. It also means that social justice is a lot more complicated than a simple prescription for eliminating gender as a social construction (a prescription that in any case contains no helpful program for how it could be achieved).

IV
Fraser's analysis suffers on the whole from working with the idea of groups while leaving aside the fact that groups consist entirely of individuals. This seems to me the reason why her analysis is interesting but contains nothing useful for how society ought to reach a more socially just level. I say "nothing useful" because her prescriptions are abstract and group-related, full of oughts and shoulds about transforming structures, and seem to leave out the very individuals whose job it will be to transform those structures. (Example: "The logic of the remedy... is to put race out of business as such" (80). Great idea, but who's going to do it, and how?) She plays elaborate games with abstract concepts, but these possess only a tenuous relationship to reality, and ignore all the interesting complications and contradictions that make real life so messy.

V
Fraser's prescription for successful change, what she terms a "transformative remedy" (85), is economic socialism combined with "deconstructive cultural politics" (92). In net effect I suspect she is right. A decrease in economic inequality would make for a society better able to overcome differences. And inasmuch as "deconstructive cultural politics" means people respecting each other more, this is also a recipe for success. However, in practical terms the socialist societies she refers to exist primarily in nations that are culturally homogeneous to a much higher degree than the United States. And it is in these countries (France, Germany, the Netherlands, and Denmark) that the loss of homogeneity has coincided with a marked increase in social tensions. As immigrants become an increasingly large part of these countries, the same class- and race-based problems that have long plagued the United States arise there as well. The lesson seems to be: where cultural differences are small, equality comes more easily, and where cultural differences are vast, equality is much more difficult. This is quite understandable and even predictable given what sociology has learned about the process of othering.

Reference:

Nancy Fraser. Justice Interruptus: Critical Reflections on the "Postsocialist" Condition. London: Routledge, 1997.

Should individuals be required to sacrifice for the greater good? Sacrificing for the greater good is an interesting paradox. I would see it as one of the highest goals of morality, but never an obligation. That is, if anyone chooses to sacrifice for the greater good, then that is to be applauded to the highest degree. But should someone choose to sacrifice someone else for the greater good, that may be necessary, but it remains immoral, because it impinges on the freedom of another. Such decisions may be required in politics, but then very few would argue that politics is an occupation you enter to perfect your morality. Real life involves dealing with less-than-perfect people in less-than-perfect situations, and often the best action or policy is itself less than perfect, though it remains the best choice. Moral philosophy can be a guide to action in politics, but it can also be an impediment if actors feel that each action they take must be morally pure. So no, individuals should never be required to sacrifice for the greater good. If it is not a free choice, then it is not a sacrifice anyway.

The "right of exit" that Kwame Anthony Appiah mentions in The Ethics of Individualism is the escape valve for any abuse identified as being caused by a group. It is a "workhorse" because it is trotted out frequently as the primary protection individuals have against abusive groups in moral schemes where groups are given wide privileges over their members. If the group asks you to do something you not comfortable with, all you have to do is exit the group and its mandates morally no longer apply to you. Appiah further identified the practical difficulties with exercising this right, since we all depend on society to some degree or other, so exit is not always easy or even possible.

The "right of exit" workhorse is only necessary if you're trying to reconcile the rights of groups with the rights of individuals. But to me that's like trying to compare apples and oranges. A group is not an entity directly comparable to an individual. A group is an entirely different class of moral object from an individual. Groups have a lot of characteristics, and one characteristic of groups as an entity is that their characteristics are always hard to define, hard to pin down. If you want to define an ethics that is applicable with clarity, then it is best to work with clearly defined entities. The individual is such a clear and unambiguous entity. "The group" is a really fuzzy entity.

Whether the group is punk rock fans, or metalheads, or rappers, or any other culturally defined affiliation group upon which people in our society can build their identity, or whether it is something more ancestral, an identity built around an ancient religion or regional practices that carry a substantially greater weight of tradition, it is always hard to find the boundaries, or even the core, of what makes that group distinctive. And the dilemma of the individual who must reconcile conflicting demands resulting from simultaneous membership in different groups is the basis of much great literature. In ancient Rome the conflict between the demands of family loyalty and those of citizenship was a frequent source of moral commentary. In 17th-century Japan such a conflict manifested between the demands of the code of the samurai and the new civil authority of the shogunate, as reflected in the 47 Ronin incident. And plenty of teenagers experience it in high school, when friends in one clique make demands on their loyalty that conflict with those of friends in a different clique.

So it is almost in the very nature of group identities that, first, individuals possess multiple identities, and second, that these identities will at some point conflict with each other. Thus groups will never have exclusive possession of any one individual. They might claim it, but they don't actually have it. And if groups can't speak exclusively and totally on behalf of all their members, that indicates to me, again, that groups are a different order of moral entity. That is why I can't conceive of an ethics that places the rights of groups (whichever group that ethics chooses to privilege) on the same level as the rights of individuals.

If we recognize groups and group identity as a secondary characteristic of human beings, and don't create moral rights based on obligations to groups as entities (obligations to other individuals, who may form groups, may certainly be recognized, but not obligations to the group as such), then the "right of exit" as such is actually unnecessary. The practical right to exit would instead be derived from other basic human rights that apply to all individuals.

Minority Languages redux


Following up on my previous post (How engaged should the government be in preserving minority languages?), I should note that Charles Taylor, in his essay Multiculturalism, spent a good deal of time on the question of minority languages from an ethical point of view. His primary example involved the implications of a law mandating French in Québec. Taylor identified a conflict between two principles: on the one hand, principles founded in the view of fundamental human rights in the tradition of Kant, and on the other, principles focused on distinctiveness, derived from the work of Rousseau. In the individual-rights tradition, a "standard schedule of rights" (52) is taken to be fundamental, primary, and universal across all cultures and subcultures. This then comes into conflict with "collective goals" such as preserving the French language and culture. The conflict arises in practice if not in theory, because preserving French requires restricting the rights of individuals to use English if that is their preference. This can be taken as a restriction on a fundamental right to freedom of speech, or more broadly freedom of expression, if the mode of expression of choice happens to be English.

My own sympathies lie with the individualist argument. Following a long tradition in Western ethics, I take the individual to be the primary moral unit and all collective traits to be secondary. This means I subscribe to what Appiah terms a position of "ethical individualism" (72), which he explains as follows: "we should defend rights by showing what they do for individuals – social individuals, to be sure, living in families and communities, usually, but still individuals." It is a position I first encountered in the moral philosophy of Rudolf Steiner, who wrote in the 1890s and called his philosophy one of ethical individualism.

I find the idea of the individual as a kind of moral monad to be axiomatic, something that can be known with certainty. Group affiliations, on the other hand, are quite fuzzy and flexible. Individuals possess identities, alter them, discard them, join and leave numerous groups, and thereby exist simultaneously in multiple categories, in ways that are virtually impossible to pin down. But an individual is the one thing they always remain. Privileging groups thereby becomes problematic, because the question of who is and is not in the group is not easily decided, and is always subject to change. Further, privileging groups seems necessarily to disadvantage at least some individuals – usually those not in the group – in every instance. Privileging individuals, on the other hand, can only disadvantage people to the degree that their group affiliation causes them to feel disadvantaged. But that group affiliation is itself a secondary trait. So if the choice is between a philosophy or policy that disadvantages individuals – the primary moral unit – on the one hand, and one that disadvantages some groups – a secondary characteristic – on the other, I would choose to disadvantage the secondary characteristic rather than the primary moral unit. That is, I would choose policies that disadvantage groups over those that disadvantage individuals.

Should the state preserve minority languages? This is a question ethical philosophers have been discussing a great deal over the last twenty years. Those who favor government involvement focus on the communal aspects of identity and the role language plays in maintaining it. They argue that each sub-group's language should be preserved, and even promoted. The argument from the other side focuses on the inhibitions to integration that a separate language presents. The issue is often raised among Latino parents upset that they sent their children to an American school and yet their children did not achieve full fluency in English. Such sentiments were among the several divergent opinions behind California Proposition 227, which banned bilingual education in the state and was passed by a majority of voters in 1998 with strong support from several segments of the Latino population. Those who supported Proposition 227 did so for various reasons, but the reason mentioned by supporters in the Latino community was that the educational bureaucracy had so entrenched bilingualism that a student could graduate from high school having been instructed in Spanish all the way from kindergarten. Supporters of bilingual education felt this to be a good thing; detractors – both Hispanic and pro-English whites – thought it terrible. Those opposing bilingual education pointed out that a non-English speaker in 21st-century America was automatically disqualified from a large number of jobs, and especially the better-paying ones. So with Proposition 227 we have a concrete example of a state government becoming directly involved with language. Clearly there are pros and cons on both sides.

It is questionable whether the government can somehow avoid any influence on the issue of language. It seems any policy, especially in education, will have an effect one way or the other. Research has shown that bilingual children achieve the best long-term educational outcomes when they have their initial reading instruction in the language in which they have the largest speaking vocabulary. Once they have mastered basic literacy in their primary language, it is far easier for them to transfer the skill to a different language. Concretely, then, a Spanish-speaking child should be instructed in reading first in Spanish, and once having mastered the basics of literacy can then be taught English as a second language. Outcomes are better than for children who struggle to master reading in a language they can barely speak. So in that sense California Proposition 227 was actually detrimental to the education of non-native speakers, although in practice bilingual teachers in many classrooms have been allowed by school district policies to impart the basics of literacy in the student's native language first. Other practical problems include the fact that a non-English speaker who studies in a classroom full of native English speakers will pick up English fairly quickly, but a non-English speaker who finds herself in a classroom full of other non-English speakers will pick up English far more slowly. And if the class teacher is bilingual but a non-native English speaker, and not a very good one, you can see how students could go from kindergarten through 12th grade and never manage to master English even though they were studying in California. [In case anybody is wondering how I know all this: my wife is a bilingual teacher and reading specialist in California.]

Hispanics are not the only non-native-English-speaking minority with ESL problems in California. Similar issues exist in Armenian neighborhoods, as well as in neighborhoods with concentrations of people from various Asian countries, such as speakers of Chinese, Cambodian, Thai, and Vietnamese.

Beyond full languages is the question of dialects. Certain forms of English are not generally accepted among mainstream whites as "normal". The city of Oakland in California became the subject of much ridicule when it attempted in 1996 to institutionalize a dialect of English (though the school board called it a full West African language) common among African-Americans, termed Ebonics. (You can read the school board's resolution here: http://www.jaedworks.com/shoebox/oakland-ebonics.html .) The Oakland school district wanted to treat Ebonics as a foreign language, and black students as non-English speakers. The only thing more interesting than the backlash was the fact that research, both in the United States and in Europe, solidly supports treating dialects as foreign languages for the purposes of literacy instruction. Both England and Germany have strong traditions of regional dialects so distinctive as to be mutually unintelligible, as well as official versions – the Queen's English in England and "High German" in Germany – that were taught in schools and became the common language across regions. (I believe France also has a tradition of strong regional dialects; in Slavic countries the dialects became independent languages – Serbian, Ukrainian, Bulgarian, etc. – so that Russian does not have a tradition of dialects in the same way. I'm sure there are examples from other parts of the world as well.) What the research supports, then, is treating Standard English as a non-native language when instructing speakers of a dialect such as Ebonics.

But treating a dialect as a foreign language for the purposes of teaching Standard English is one thing; instructing students in the dialect with no intention of introducing Standard English is another. It was not Oakland's recognition of Ebonics as a dialect that was controversial (that fact was already well established among linguists); it was Oakland's intention not even to try to teach Standard English that attracted so much attention. Which gets back to the issue of public policy and minority languages. Should the state encourage, or even impose, Standard English on students in the public school system? The ideal behind such a proposal would be a mainstreaming of what are currently distinctive cultures, so that over two or three generations the culture gradually disappears, much the way Irish, Sicilian, and Polish cultural heritages are today largely insignificant among the descendants of those who immigrated to the United States between the 1880s and the 1920s. Others see a terrible loss in cultural heritage that starts with language, and hope to preserve the uniqueness, and also the separateness, that comes with a strong non-mainstream language and cultural identity. Which option is preferable? I suppose it depends on the outcome you are seeking. If you want people to be equal culturally and socioeconomically, then it helps to emphasize and reinforce similarities, including language. But if you want people to be free and distinctive, then you would find it important to preserve language as the basis of identity.

Mandating instruction in Standard English inevitably results in a gradual diminution of distinctive cultural identity, though the process usually takes a few generations. Whether this amounts to suppression depends on the intention behind it, and on whether it is desired or resisted by those subjected to it.

In "Justice as a Larger Loyalty" Richard Rorty argues that Justice is simply a sub-category of loyalty. Skipping his main argument, I started thinking about his examples and how he went about discussing the issue.

The problem with hypothetical examples of moral dilemmas is that they invariably oversimplify the situation. In trying to highlight the dilemma, they posit a few facts and then ask the reader to consider what they would do. But what any of us would do is ultimately dependent on a far greater range of data than the hypothetical example can provide. And, of course, such examples also tend to assume that our actions are largely the result of our moral reasoning, something that should not be taken for granted.

Two examples in Rorty's essay are the dilemma of a family after a nuclear holocaust who now shoot their neighbors to preserve their own dwindling food supplies, and the classic lifeboat dilemma. The example of the family after a nuclear holocaust can be considered from several angles. For one, everyone is going to die anyway, so why worry about whether you can feed your family for two extra days? This is the real problem with hypotheticals: as models they are always incomplete. There are always factors that the model excludes. For example, someone once asked me (knowing I am married), "If you could sleep with another woman and no one would ever find out, would you do it?" The expected answer is, "… well, if no one would ever know …" However, there is another dimension to the issue: I would know, even if no one else ever did. And this is not insignificant. So back to the nuclear holocaust example: food is limited, and family and neighbors all want to eat. The choices are: fight off the neighbors in the name of family, or share with all and run out sooner. Given the bleak situation, I question whether the moral course is to put a black spot on your soul by killing your neighbors for the sake of your kids, or to go out with a clean conscience, knowing you did your best for everyone. The radiation will probably get you before the hunger does.

Move the scenario to a lifeboat on the open ocean, and the problem of hypotheticals again shows up. There are simply too many unknowns. You don't know how long it will be until you are rescued, whether you will be rescued at all, whether you will be able to catch fish, or where the ocean currents will take you. And anyway, dehydration is a bigger problem than malnutrition. To kill off the strangers so as not to have to share food, and thus provide better for your family, sets a terrible moral example for them, and they will have to witness the deed. Then, if you all survive, you will never know whether you could have done things differently and brought everyone through.

Finally, as a principle, looking out only for those closest to you does not create a society I or anyone else would want to live in. So if it doesn't serve well under normal circumstances, how is it any better in extraordinary ones? Once you accept the expediency excuse (desperate circumstances call for desperate measures), you have the slippery slope problem: when are circumstances desperate? Nuclear holocaust is remote, but what about unemployment? Is that desperate enough to justify stealing food (my family needs it more than the store proprietor does)? Where is the line?

Reference:

Richard Rorty. "Justice as a Larger Loyalty." Justice and Democracy: Cross-Cultural Perspectives. Ed. Ron Bontekoe. Honolulu: University of Hawaii Press, 1997. 9-22.
