I believe I was in college when I first had someone tell me I shouldn’t use the word ‘Indian.’ I had certainly heard plenty of critical commentary about Christopher Columbus, and at least some of that commentary had included a remark or two on the absurdity of applying the word ‘Indian’ to the indigenous population of the Americas. Still, in the lily-white neighborhoods of my upbringing, this word became just another absurdity in a world that already had plenty of them. So, when my Navajo classmate, Wendy, expressed a clear preference for ‘Native American,’ this was new. What was new about it wasn’t the critique of the word ‘Indian’; it was the sense that the critique mattered.
I wish I could say that I responded appropriately, but I’m afraid I can’t.
There was whitesplaining; let’s just leave it at that.
Admittedly, the rest of this post could qualify as more of the same. I hope not, but we’ll see…
I’ve heard a couple of interesting theories about the origin of the term, ‘Indian,’ but I’m not sure that any of them have really nailed down the concept. Origins are not the only rubric by which we might assess the meaning of a term, and folk-etymologies are infamously inaccurate, so the whole question of where the word came from has to be taken with a grain of salt.
The notion that Columbus thought he was in India is an incorrect correction, at best. Columbus thought he was in the East Indies. That may sound like a fussy point to make, but folks ought not to point out one mistake only to land on another. Somewhere in his work, the historian of religion, Sam Gill, suggests that Europeans used the term 'Indian' as a kind of catch-all category for everyone who lived east of the Indus River. By this account, the problem with the term is not so much a clear factual error as a kind of vagueness, that and a projection of the European imagination into new territory, not at all unlike the projection associated with 'orientalism' in other historical contexts. Another interesting take comes from the noted activist, Russell Means. According to Means, the term originally meant 'under God,' thus making it an accurate observation of the spirituality of America's indigenous peoples. At a time when many were switching from 'Indian' to 'Native American,' Means embraced 'Indian,' even insisted upon it. Of course, this may have had something to do with branding. Means was a long-time member of the American Indian Movement (AIM), which might have given him a little extra reason to hold onto the label. In the end, it seems that most of the indigenous peoples of North America have shifted to 'Native American,' and along with them, so have the bulk of those seeking to support indigenous peoples or simply to show respect. Mileage always varies, but 'Native American' seems to be the norm at this point.
I am occasionally reminded that there is at least one problem with 'Indian' that 'Native American' does not solve, namely the vagueness of such a catch-all term. This vagueness facilitates a range of problematic thinking. For example, I lost track of the number of people who asked me if I lived in a teepee while I was living on the Navajo Nation. The Navajo people had never lived in teepees, but the imagination of the American public (and the world at large) often puts them in teepees for the same reason that it put so many peoples from the great plains in Monument Valley for so many classic westerns. To the public at large, an 'Indian' is an Indian, and because we can use the same word for so many peoples, they think the word must tell us something about them. That the term is really little more than a default category for a broad range of people whose customs were poorly understood when the term was coined doesn't seem to enter folks' thinking, at least not without first giving them a verbal shove in the right direction. Still, to the degree that this is a problem with 'Indian', that problem is not much improved by saying 'Native American.' Since I began focusing on Native American studies in grad school, I have had a couple of friends and family ask me what "Indians believed" about topics like God, reincarnation, or the afterlife in general. Today, I am sometimes asked what 'Native Americans' think about the same topics. I often find myself responding to these questions by asking: which tribe? Others might ask why such questions are being put to a white guy in the first place. In any event, the problems with such questions are not much improved by the change in vocabulary. Whichever word we might use, the question assumes implications that just aren't there.
I happened into an interesting illustration of the problem one day while surfing travel blogs. One of these had a lovely account of a couple’s visit to the National Monument at Little Bighorn Battlefield. Their account was thoughtful and respectful, and I do not mean to direct negative attention their way (and in any event, I can no longer find it, hence the lack of a link), but one thing about their post stuck out in my mind. They made a point to say that their tour guide had been a student at the nearby Little Bighorn College, a tribal college, so they had gotten “the Native American point of view” on the battle. (I believe I got the quote right, but in any event, that was certainly the gist of it.)
When people address the significance of the Battle of Little Bighorn (or Greasy Grass) to Native Americans, they are usually thinking in terms of those who fought against Custer and his troops. That would be Cheyenne and Lakota for the most part (though there were some Arapaho in the village too). I can't help but think that those who read the blog in question will naturally assume the "Native American" perspective mentioned in it reflects the point of view of those peoples, but Little Bighorn College is on the Crow Agency, and the student in question was very likely Crow. In fact, his or her ancestors may very well have included some of Custer's scouts. To the degree that his or her native identity may have shaped the story these bloggers heard, it is unlikely that it was shaped in the manner most readers would have imagined.
Now, I certainly do not mean to suggest that a Crow’s perspective on the battle of Little Bighorn should weigh less than that of a Cheyenne or Lakota, not in the slightest. What I am suggesting is that the difference in this case matters. There is a difference between the perspective of someone whose ancestors fought against Custer and someone whose ancestors allied themselves with him. That difference is easily obscured when using terms like ‘Native American’ or ‘Indian.’
…which reminds me of one discussion I had about these issues with my own students at Diné College on the Navajo Nation many years ago. Fed up with my efforts to problematize every term available for the indigenous people at large, one of them just asked: "How about Diné?"
…which got us to the end of the lesson about 15 minutes early.
Don’t get me wrong; there are no magic solutions to any of these problems, but some words help us more than others. There are many contexts in which words like “Indian” or “Native American” are tough to avoid, but when you know which specific people you are talking about, it is almost always better to name the indigenous community in question.
I am continually amazed at the faith some conservative Christians place in the authoritative pronouncements of a single ancient book. No, not that one. I am talking about the dictionary.
…pardon me, ‘thuh dictionary’.
I have long since lost track of the number of times someone has told me what this or that word means according to 'thuh dictionary'. It could be any word, but frankly, the ones most likely to land me in front of the court of lexicography are 'homophobia' and 'atheism'. Significantly, I don't think many of the people who launch into this sort of dictionary-whinging gambit have even looked up the words they hold court over. If you ask them which dictionary, they will often tell you 'Webster's', as if that meant a damned thing!
Dictionary-Whinging (sorta verb-like, but more gerundy): Pronounce the g like a j, dammit. It means being a jerk, but a certain kind of jerk, …a jerk with or over a dictionary.
Sometimes folks will invoke the power of 'Merriam-Webster'. This at least is a real entity, a branch of Encyclopaedia Britannica, so that tells us something about the source, but it doesn't do much to pin down the book in question. As to the name 'Webster's'? That is in the public domain. Anybody can publish a Webster's Dictionary. You, me, the homeless guy down the street could each write out a couple definitions and call it a "Webster's Dictionary." Hell, I could translate my cat's noises and call it a Webster's Dictionary.
In fact, let’s do that!
WEBSTER’S COLLEGIATE DICTIONARY OF ARCTIC CATISMS
Compilerized and Authoritated by Daniel S. Dammit
July or maybe December, 2013.
Editorial Staff: Fido, Junkmail, and Auto-Kitty
Mrrour: Pet. (e.g. ‘Pet me please!’)
Me’a’our: Pet, used with a sense of urgency (e.g. ‘pet me now dammit!’)
Mmmmmeuurrrrrr: Pet, used in a polite way (e.g. “If you have a moment, could you please pet me, …and perhaps change teh literbox. …no hurry.”)
Meow: Ironic Usage. It means: "I don't actually sound like this human. You are imagining things."
Meeeh!: Head-Butt (e.g. "I'm lonely, human. Please head-butt me this very instant.")
There. That’s my Webster’s Dictionary. Suck it Lexi-Judges! You have to use that now when you interpret cat. I has spoken.
(Aside over. We now bring you back to your regularly scheduled, cat-free, post.)
The bottom line is that a significant portion of people citing the authority of 'Webster's' are simply bluffing. They haven't looked anything up, much less thought about it. I guess they figure the meaning of the words in question is so obvious to anyone but the idiot they are talking to (which is often me) that there is no real need to consult the authority of the imaginary lexical-judge; it goes without saying that this good Justice will back their own understanding. I guess the spirit-filled just know what thuh dictionary would say.
It seems to me that some people look upon a dictionary as a judge of sorts, or maybe a legislature, both if they get their way, but most of these folks are happy to admit that ‘thuh dictionary’ is not an executioner. No, that is a role they hope to play themselves.
One thing I find quite amusing about all of this feigned dictionary-deference is that it always works best with the really bad dictionaries. You see, a good dictionary will include a number of entries spelling out a variety of different uses of a given term, but the ideal dictionary for the lexical authoritarian contains just one entry for every word. So, on the off chance that he actually bothers to look anything up, our dictionary-whinging fellow is not going to want to bother with anything resembling choice. He wants a single entry, and (Webster's willing) he will present that single entry as clear and convincing proof that the word in question has just that one proper meaning, …thus effectively turning the weaknesses of an incomplete dictionary into a virtue. For these purposes Dictionary.com will serve the vocabulary fascists much better than the Oxford English Dictionary. Webster's Third New International would be right out, …at least it would be if such folks knew enough about dictionaries to realize what that infamous source of lexical permissiveness contains.
Which brings me to a second point of amusement about the art of dictionary-whinging. Its practitioners seldom (perhaps never) understand how dictionaries are actually made. They haven't studied lexicography, and they haven't even read the methodology section of any given dictionary. Most probably don't even know that such a section exists; it sits in those automatically skipped pages at the beginning of the book they aren't actually reading anyway. These folks certainly haven't read Samuel Johnson's preface to his Dictionary of the English Language or any other thoughtful discussion of the topic. If they did, the first thing they would find is that lexicographers generally don't work the way they think they do, and they don't intend their dictionaries to be used the way they think they do.
Now let me give you a minute to parse all the ‘they’s of the last sentence. Wait a minute! I sense a new volume of Webster’s coming. Here it is:
WEBSTER’S NEW NOT-SO-COLLEGIATE DICTIONARY OF DANOLOGICAL THEM-ITUDE
Compiled on a Lark, 2013
They2: Those people.
They3: Them other guys.
They4: They (like I said …dammit!)
They5: I obviously don’t get laid often enough.
Anyway, my point is that with the possible exception of early editions of the American Heritage Dictionary, lexicographers are not legislating and they are not adjudicating language. They are informing us about common usage. In short, their approach is descriptive rather than prescriptive. This is exactly NOT the approach that those seeking to use dictionaries authoritatively would wish it to be.
To put it another way, the judge in this instance refuses to do his job as the dictionary-whinging bastards of this world would have him do it. What dictionary-makers consistently seek to do is provide us with a responsible account of the way language is actually spoken and the meanings of words that people speaking a given language actually use. What the dictionary-whinging types consistently want is an authoritative pronouncement delivered from on-high about just what meanings we SHOULD be attaching to any given word. They want the dictionary to tell us how to use language.
That really should be end-game, folks. When the judge doesn't adjudicate, it oughtta be case-dismissed, but that is almost never the case. Pretty much everyone hitting me over the head with an imaginary Webster's will just go right on doing it after they have been shown that their weapon of choice is not really meant to be used that way.
My rant began with a reference to religious folks though, didn't it? Okay, it did. To be fair, this is a post I dropped several months ago and just picked back up. The sticking point was just that: do I really want to talk about the general misuse of imaginary lexical authority, or do I want to explore the specific role of such practices in the thinking of pious people? Tonight, my solution is to make just one point about the religious variation, and it is this:
It is sort of fitting to find that people who wish to approach life as though it must be lived according to a specific set of directives from on-high would replicate that model in their approach to language. This is the prescriptive life well lived. They find an ought-to in every decision and an essential meaning in every word, all hard-wired right into the universe itself. Actual language use then becomes a set of cases fitting clearly into categories of right and wrong, just as anything else one might do in a world defined by an ultimate Legislator. The deus ex machina that some folks look for in a dictionary is thus pretty much the same one they commonly proclaim outright in their other book-weapon of choice.
I could believe in a God of Korean BBQ, yes I could!
I Grumble: I wish I had a nickel for every time a Christian told me that my take on the existence of God isn’t really atheism; it’s agnosticism. No, those nickels wouldn’t make me rich, but they would add up to a nice meal at a decent restaurant, and with enough change to leave a damned good tip.
On one level, this is interpersonal aggression. If someone can take your identity away (or at least that part of your identity most salient to the topic at hand), then the rest of the discussion is going to suck no matter how well you handle the particulars. It’s the sort of argument that is really about who is in charge.
…and I mean in a right-here-and-now kinda way.
Just like a husband and wife engaged in a two-day spat over which brand of butter would have been a better purchase, atheists and theists (mostly Christians) will tap away on our keyboards well into the wee hours of the morning, all over the question of just what atheism really is and who gets to call themselves an 'atheist'. It's almost as though we have that agreement, you know the one about never going to bed with unresolved issues, only we never do get to the make-up sex on this particular topic. We just keep jabbering at each other until the sun rises and it's time to go to work tired. (Thanks honey!) The bottom line is that what ought to be the opening stages of a larger dialogue becomes the overwhelming focus of an exhausting (and often pointless) pseudo-discussion.
On another level, the subject is certainly worth some time. The semantics are tricky here, and one will need to sort the meaningful possibilities out before proceeding to any substantive issues. And Hell, I figure I've encountered a genuine concern or three amidst all the bunk believers have thrown at me on this issue over the years. I know I have a few truth-in-advertising concerns for those calling themselves Christians as well. Plus, I think I'm actually adjusting my views on this one a bit lately. So, I'm going to have a go at this all-too familiar old topic and hope that the results won't lead to any incidents of self-mutilation.
So, please take a deep breath!
The Basics: The problem is this: among the group of people calling themselves atheists, some of us will happily do so without presenting any reason to believe that there are no gods. If pressed on the issue, we will often claim that the burden of proof lies with the believer. Atheism thus represents a stance we will take in the absence of positive reason to believe in God. This approach to atheism is sometimes known as "weak atheism," as opposed to "strong atheism," which is generally taken to refer to the stance of someone prepared to argue that no gods exist at all. Some might say that a weak atheist simply doesn't believe in any gods whereas a strong atheist says there are no gods.
And here is where Theists often cry foul. Isn’t the neutral position really that of agnosticism, they will say, and how can it be that atheists (weak or otherwise) have no burden of proof? Isn’t that unfair?
But of course atheists have a number of arguments in favor of these terms, not the least of them being an analogy to legal reasoning and/or the structure of formal debate, wherein an affirmative position is often given the burden of proof. If someone is accused of a crime, we do not expect the defense to prove them innocent; we expect the prosecutors to prove them guilty. The problem, as weak atheists often phrase it, is that you cannot prove a negative. This isn't quite true, or even close really, but it does touch on a real problem. Many negatives can be proven true, but many cannot. If, for example, the original claim to be disproved is too vague, it will be difficult to formulate grounds for proving it false. Making someone responsible for proving a negative thus creates a double-bind of sorts, making the critic responsible for any ambiguities in the position he seeks to criticize.
The weak atheist position construes this debate in terms of a proof that at least one God exists. If the theist can make his case, then great, he wins; but if he fails, then we go back to our default judgement that no gods exist.
Theists typically reject these terms of debate, often by suggesting that its proponents have mislabeled themselves. 'Atheism', they will suggest, should be reserved for those prepared to prove god doesn't exist; those who merely assume he doesn't in the absence of evidence are better described as 'agnostics'.
It is actually a rather soft version of agnosticism that theists keep advancing as the proper alternative to the weak atheist position, effectively telling us: "If you don't know, then leave it at that." This shoulder-shrug version of agnosticism is not to be confused with hard agnosticism (i.e. the notion that questions about the existence of God are inherently unknowable; in short, "I don't know, and neither do you").
Of course soft agnosticism could be a perfectly reasonable description of the absence of affirmative belief, but so would weak atheism. In fact, the two categories could well apply at the same time. …hence the common practice of referring to oneself as an agnostic atheist.
Many do just that.
Holy Holistics, Batman! It's worth considering that such labels go well beyond the stance one takes in a particular debate and extend to questions about behavior, values, etc. Life is full of decisions one has to make in the absence of perfect information, and this is one of them. Sooner or later, we have to make decisions predicated on our answer to questions about whether or not God exists. I will either keep the Sabbath or not; I will either say the Sinner's Prayer with conviction, or not. I will either covet my neighbor's hot wife or not. …you get the idea. If the debate over whether or not God exists ends in a stalemate, the actual pace of real life decisions does NOT respect that stalemate (and from what I hear, neither will the God of Abraham). Whatever the balance of evidence, one has to make a decision. This is exactly what burdens of proof are about. Assigning a default judgement is a process of deciding what you will do if you do not know the answer to a given question.
The weak atheist position may be frustrating as Hell to theists, but it has the virtue of addressing this question of how one will actually live.
Let’s Take a Step Back: There is just one thing about that last twist in the argument above: it isn’t quite a function of logic or reason, …not entirely so, anyway. Rather, it is a question of how the merits of a reasoned position will map onto the practical judgements of actual life.
Default judgements lie at the intersection between reason and social interaction, and the question of who has the burden of proof in this debate is just one of the moments when the politics of religion intrudes on the intellectual exercise of reasoning about it. However much the participants may want to imagine themselves capable of resolving the issue on the merits of the case, the prospect looms large that it will still be an open case long after any particular discussion (or even years of study and centuries of dialogue). It would be nice if someone could produce end-game proof one way or another, but the reality is that most of us will end up making our decisions about a range of relevant issues in the wake of a stalemate shaded by little other than a sense that one side or another has a good point here and a slight advantage there. In short, the debate may never end, but sooner or later we have to declare our own take on the issue. At that moment, when we have to decide in the absence of a clear accounting, the burden of proof may well prove to be the decisive consideration.
And so we haggle about the terms of the debate even to the point of never getting to the debate itself, partly because we know this little technicality is likely to make a difference on down the road a bit.
Whatever else weak atheists are saying, they are also saying “let’s handle this issue one God at a time. You give me one sound case for one God as you define Her, and I’ll give up my position and go with that one God.” This position offers real advantages for both parties, not the least of them being that it bundles all the tricky semantic questions about what one means by ‘God’ into the same package and lets the Theist have first crack at resolving them. The details of the discussion will then be on her terms (or at least about her terms).
This has the advantage of providing for a pretty direct test of that God, at least for those willing to approach the subject by means of reason (which is admittedly a diminishing portion of the population …it having become an article of faith that religion is about faith). In short, this approach to the conversation maximizes the relevance of any conclusions drawn to the actual beliefs of the Theist involved in any particular discussion.
But what about the atheist? For him, this way of modelling the issue really tests a pretty narrow aspect of his professed stance: his ability to present a reasonable objection to one particular approach to belief in one particular god, …at least as argued by one particular person. It leaves his take on any other gods pretty much off the table altogether. And (here is where I am cutting against years of habit) I think there is some justice to the claim that this is something of a dodge.
If someone has concluded that there are no gods, or even that he sees no reason to believe in any, then even this latter version of his stance necessarily goes well beyond the subject of one debate with one believer. It’s a fair question; what about the others? How do you deal with them?
Those professing weak atheism are generally unwilling to enter onto that turf, not the least of reasons being that any attempt to produce an end-game argument on the subject will effectively make them responsible for resolving all the tricky semantic questions while theists stand by with an easy out. If an atheist attempts to prove that all gods don't exist, he has to settle on a definition, and he has to do it without a claim that that definition fits the real thing (since he doesn't think there is a real thing). The mistakes of believers thus become the responsibility of the atheist, and the liar's paradox then mocks his every move.
And yet, there remains some trace of a legitimate question here. Does the stance of even a weak atheist not go beyond the particular gods of the particular theists with whom he is talking at any given moment? Clearly, he expects to reject any given god with whom he is confronted at any given time. If that expectation does not yield a direct argument on the topic, is there no accounting for it whatsoever? None?
At the very least we could frame the conclusion that there are no gods as an induction of sorts, derived from our past experiences debating the existence of particular gods with particular people in a variety of different conversations. At some point, one begins to form an expectation, even a tentative conclusion. The judgement is there, and one can even find ways of framing it for purposes of discussion. It’s just that the conversation gets kind of messy if you go this route.
But maybe that’s a mess more of us ought to consider getting into.
Let’s Wrap it Up (and it’s About Time!): The issue here isn’t really what kind of atheist are you; it’s what kind of conversation do you want to have? How do you prefer to frame the debate? And the truth is that most of those professing weak atheism do in fact cultivate a number of alternative approaches to the subject; they just don’t recognize them as appropriate answers to questions about the existence of God or gods. This happens precisely because the conversation must at some point cease to be a question of metaphysics and become a question about social practice.
Ultimately, the judgement that there are no gods has less to do with the nature of the universe than with the value of certain ways of talking about it. It is a judgement that god-talk never has produced, and never will produce, a description of a supernatural entity that is literally true. On a good day, god-talk might produce inspiring poetry, amazing architecture, profound moral thoughts, or even deeply moving personal narratives, but it will not produce a plausible case for a supernatural entity. Even the assertion of a weak atheist stance means at least this much; that one does not expect to hear talk of gods produce a believable claim about the existence of such a being. One may prefer to test that one god at a time with the Theist on the hot seat, but those of us claiming the label are certainly communicating something about our expectations regarding the subject at hand.
We can do more than that, and we actually do more than that every time we comment on the realities of religious practice; every time we describe the horrors committed in god’s name or link any poor judgement to the vagaries of religious thought. This sort of talk doesn’t always rise far above the level of gossip (or even outright idiocy), but it often calls attention to real problems. At least part of the rationale for rejecting belief in God is a sense that talk about him is unlikely to produce a claim worth affirming, at least not in its most literal sense. (Some of us may find Martin Luther King Jr.’s words inspiring or even turn the radio up for a religious tune or two, but there is always some sense in which we are not quite down with the whole message.) And herein lies the moment when even a ‘weak atheist’ goes a little beyond the confrontation with any one case for God; he is pronouncing a verdict on a vast range of discourse about gods, and he is telling us that all of it (in his estimation) fails to produce a compelling case for belief in that God. In some instances the God is too vague, in others She is a contradiction, and when a clear and coherent concept does make an appearance it just doesn’t have the ring of truth to it. This is a judgement that goes beyond the test of one particular god belief, and weak atheists make these sorts of judgements on a pretty regular basis.
So, it isn’t really that we have two types of atheists here so much as two (or more) different ways of setting up a discussion with theists over the subject. One typically uses the deductive models of metaphysical reasoning to test one God at a time (preferably that of the particular believer we happen to be talking to). The other typically uses probabilistic reasoning to pass judgement on a range of loosely connected ideas sailing under the rubric of god-talk. In effect, the second approach deals not with God Herself so much as the language in which she is typically presented, and it deals with that subject in terms of summary judgements. There is nothing inherently wrong with this approach, but it’s a bit less rhetorically satisfying, especially when squaring off over the subject with someone who insists that some version of God is real after all.
Most of us are uncomfortable with generalizations, and I think even atheists are oddly attached to the sense of absolute truth that one expects from metaphysical discussion. When we approach the topic that way, we can often say ‘no’ with something approaching certainty. It is the certainty of deductive reasoning and all-or-nothing proofs. Theoretically those are the stakes, the theist too could win one for the Gipper, …or Jesus, I suppose. If these are the stakes, then yes, I think I am still inclined to opt for the weak atheist position. But I do think it is reasonable to expect some accounting for the rejection that goes beyond the god of one particular conversation; that account will of necessity turn into a form of social commentary. And thus my rejection of god turns out to be a rejection of what men say about Her, and on that score perhaps there are sufficient grounds to field an affirmative argument.
I have very few classroom horror stories from my college days. Of course I remember a lot of petty behavior, some arguable decisions, and I witnessed at least one case of genuine abuse to a classmate, …okay two. But it was pretty rare that I personally felt any significant discomfort as a result of anything the teachers did in the classroom.
My statistics textbook took a Hell of a beating, but that’s a different issue. I liked that teacher. I just hated statistics.
But there was one really awful lecture that I remember in detail. Lucky you, dear reader, because I am going to share the misery.
It was my last semester in college and I was finishing up the credits for a second major, linguistics. In those days, the linguistics program at the University of Nevada, Las Vegas was interdisciplinary. So, I had taken plenty of classes in linguistic anthropology, sociolinguistics, psycholinguistics, logic, philosophy of language, etc. …all really great stuff! I enjoyed every minute of it. But that did leave one really huge gap in the knowledge that a guy graduating with a degree in linguistics ought to have. I hadn’t yet taken a full course in grammar. I didn’t even need it to graduate, at least according to the degree requirements, but that didn’t sit right with me. How could I graduate with a degree in this subject without the benefit of a full course in grammar? I’d heard good things about the lady who taught grammar in the English Department, and so I signed up and prepared to get down and dirty in the realm of syntax.
I knew something was wrong when I found a middle-aged man standing at the head of the classroom on the first day. I do remember his name, but let’s just call him Mr. H. Mr. H. passed out index cards and asked all of us to fill in some personal information while he explained that the usual instructor was on sabbatical that semester. He would be teaching the grammar classes.
For the next few minutes everything seemed pretty standard. No red flags went up as Mr. H. reviewed the syllabus, and I felt pretty confident I was going to learn a lot in his class. I grew even more pleased when he explained that he would sometimes venture outside the narrow bounds of grammar to discuss other aspects of language use.
It was as though he had promised to have strippers pass candy out during class.
I couldn’t wait for some of those discussions. Luckily I didn’t have to, as Mr. H. proudly announced his first slightly-off-topic lecture for the semester. He wanted to talk about euphemisms.
I was a happy guy.
He began by telling the story of his first job, working in a mom-and-pop grocery store somewhere in Texas. Mr. H. talked about the time some yankee had come in and asked for some jalapenos (pronouncing the ‘j’ about like you would in ‘jam’). His reply, as Mr. H. explained it, was: “Sir, I believe the Spanish call them jalapenos” (pronouncing the ‘j’ like the ‘h’ in ham). He then proceeded to explain that this was a terrible thing to do and that no-one should ever make fun of the way anyone else speaks, ever.
I wasn’t entirely sure that he had described an act of mockery, but that was a detail I could easily overlook. On the main point, the man was preaching to the choir as far as I was concerned. I was really glad I had signed up for the class.
And that’s when things took a bad turn.
Within just a couple minutes of announcing this principle that one shouldn’t make fun of other people’s speech, Mr. H. began to tell us all about the decline of the English language as a result of recent trends. Mr. H. was quite concerned that folks had begun to water the English language down with a variety of euphemisms. It was a terrible situation as our great medium of communication had been harmed a great deal by this trend.
Mr. H. had quite a few examples, but the first one that I can remember was the term ‘African-American’. Mind you, this was 1990 and the battles over political correctness were picking up steam fast. This topic had not yet run its full course in the public sphere; it hadn’t yet bored everyone to tears. My classmates sat on the edge of their seats while Mr. H. proceeded to explain that he had nothing but love for all God’s people, but he didn’t believe in calling people by the wrong word. You had to call people what they were, not what they weren’t. I sat back just a little disappointed and waited for Mr. H. to explain that ‘black’ was the proper name for the people in question.
Instead he proceeded to tell the class that ‘negro’ was what ‘they’ were and that was what folks ought to call them. I sat back up. He had at least surprised me. I had to give him that, but did I hear the man right?
Was he actually skipping right past the common usage I expected of conservatives and moderates to rescue a sordid vocabulary choice out of a distant era? I listened on as Mr. H. insisted that he meant no disrespect by this term and that it had no insulting implications. ‘Negro’ was the right word and nothing else would do. Those using the term ‘African-American’ were engaged in a full-scale assault on the English language, and she suffered terribly at their abusive treatment.
The rest of the class ate this message up. I mean they loved it! For my own part, I dropped right out of that choir he was preaching to.
My concern wasn’t entirely with the politics at hand. I was never fully on board with the PC approach to vocabulary, and I could think of reasonable concerns about a lot of the verbal practices in question. But Mr. H. wasn’t producing reasonable arguments. In fact, he was demonstrating a level of naïveté that I didn’t expect from someone who was about to teach a class in descriptive linguistics. Objections were crowding their way into my thoughts in such numbers I feared my mind might burst if I listened anymore.
– Mr. H’s assertion that there was a right word for this or any other topic and that anything else was poor use stood out like a sore thumb. By ‘sore thumb’, I mean a completely unsupported premise. Worse than that: this assumption flew in the face of pretty much everything lexicographers had to say about the subject. Words had multiple meanings, and topics could be referred to in a variety of different ways. Languages change! You could argue pros and cons of different word choices, but Mr. H. just insisted there was a right word and the public wasn’t using it anymore. This was a bit like discovering your geography teacher was a flat earther.
– ‘Negro’? Seriously, ‘Negro’?
– Details aside, declensionist narratives about the state of a given language are tired and damned lame. Untold prophets have warned about the decline of English, each with a different sin on their minds, and each cherry-picking the evidence with all the shame of a child stealing fruit from a neighbor’s tree. In this case, there was the additional absurdity that Mr. H. wanted us to feel for the abuse of the English language even as he minimized concerns about the abuse of actual people. This was personification with an agenda, and that agenda had little room for concerns about folks who really could feel the effects of abuse.
– I really couldn’t square the entire theme of the lecture with the lesson Mr. H. had drawn from his first example. Were we not making fun of the way some folks talked? I suppose he was suggesting that advocates of politically correct speech were making fun of others, but he had gone well past correcting that and right into the realm of mocking their own vocabulary preferences.
– A bit depends on the presentation, but the notion that words like ‘African American’ are euphemisms contains at least one really ugly implication. If a euphemism is a word that makes something ugly sound better than it is, and that did seem to be the way Mr. H. defined it, then what did that say about his thoughts about the people this term was applied to? Was he not suggesting that the right word really did convey something bad? He denied this of course, but that really seemed to be the station to which his particular train of thought had been headed.
All of these thoughts and others crowded into my head and screamed for me to let them out. I couldn’t believe I was hearing this crap from a guy who studied language for a living.
I looked around and I saw over 20 students falling in love with this man.
It’s okay, I thought. I’m here for the lessons on grammar. This doesn’t have to matter. Who knows? Maybe Mr. H. will respond well to challenging opinions. Should I say something now and see how he responds? But where to start? I thought about whether or not to field an objection as I just sat there and took in the horror show.
The straw that broke this camel’s back came when Mr. H. took up the use of the term ‘gay’.
Yep. He was against it.
Mr. H. told us that he would never use that word. He went on to explain that he would never condemn a man for being what God made him, but he believed in calling people what they really were. I thought surely that he was going to tell us the proper term was ‘homosexuals’.
What these people were, Mr. H. informed us, was ‘faggots’.
No other word would do.
And Mr. H.’s fan club fell over themselves to show their appreciation for this point. It was quite the surreal experience for me, watching my classmates nod and stare lovingly at this performance. I thought surely I would soon be sick.
At this point, I felt like Mr. H. had enough rope. If I couldn’t hang him with it, I should at least be able to rein in the message a bit. And anyway, I really needed to see how he would respond to disagreement. So, up went my hand. Mr. H. called on me. And I proceeded to ask him if he didn’t think it more appropriate to consider ‘faggot’ a dysphemism (in retrospect, I should have just said ‘insult’). I went on to ask if he didn’t think the English language was growing new insults at about the same pace that it was growing euphemisms, or if he had specific reasons for thinking the one trend was outpacing the other. I think I managed to keep a respectful tone, but I definitely expressed my disagreement.
And the class grew silent.
The man literally scowled at me. In falling tones, Mr. H. asked me for my name. He then proceeded to dig the pile of index cards from the beginning of class out of his shirt pocket and slowly flip through them until he found mine. He then studied my card for a minute or two, all of this in utter silence. No-one said anything.
With a heavy sigh, Mr. H. finally placed the cards back in his pocket and looked back at me. “What I am trying to say is…” He then proceeded to restate his general thesis that English had been watered down through excessive euphemisms. He did this without responding to any of my points at all. It was amazing. There was no reference to anything I had just said, no answers whatsoever to my questions. No counterarguments. Nothing!
Mr. H. then asked me if that message was okay with me.
After a brief pause, I said ‘yes’.
By ‘yes’ I meant that I would be graduating without the benefit of a full course in grammar.
Sometimes idealization strengthens a value; sometimes it destroys it. The trick is to know the difference.
It gets more difficult to tell the difference when a value becomes central to one’s own life, or if it has become a commonplace theme in the community around her. Failure to follow a given value can become so unthinkable that dissonance reduction strategies simply overtake the effort to apply it to the miscellaneous judgement calls of daily life.
At the extreme end of caring about something, defense mechanisms become so strong that the rhetoric of rationalization simply eclipses the discourse needed to plan effective action. Thus, love becomes a foreign notion to much of Christianity, Reason and Logic become brand-names jealously guarded by unbelievers, and self-reliance the hallmark of Americans themselves as dependent on others as any people ever were. In like manner, racism becomes unthinkable to liberals, notwithstanding the prominence of racial categories in our policies, and patriotism goes without saying to conservatives, even when they attack their own nation (literally or metaphorically). It is easy enough to see that talking up a value doesn’t always mean living up to it; but things are worse than that. Talking up a value can sometimes chase any meaningful effort to put it into practice right out of the building.
I used to think about this a lot when I worked in Navajo country. Out there the value term with the most weight to it was hózhǫ́. This is usually translated as something like ‘balance’ or ‘harmony,’ and for many this is enough to tie the notion to themes better suited to American pop-Buddhism and New Age thought. In contrast to bilagáanas, diné (Navajos) were non-confrontational, at least according to common folk-wisdom on the subject. But it wasn’t merely outsiders who approached the concept in these terms; Navajos themselves sometimes use this approach to explain themselves to others.
This theme always troubled me, because it sure as Hell didn’t describe the people I knew and worked with. Sure, I had seen plenty of situations in which diné showed notable restraint or reluctance to engage in confrontation. But I had seen some spectacular confrontations in my days out there. More to the point, it had always seemed to me that conflict rested just under the surface of pretty much every item of business occurring in that area. The question, it seems to me, is not whether Navajos engage in conflict more or less than the average Bilagáana (white person), but rather under what circumstances each will do so and for what purposes. I think the answer to this question is different for Navajos than it is for Anglos, but I also think this requires a lot more subtlety than the oppositional stereotypes generally allow.
I had a boss out there who used to tell me that the sort of balance implied in the concept of hózhǫ́ actually entailed a trace of conflict. Conflict too had its value in this ideal, he seemed to be telling me, and so it too had its place in the balance people strove to attain. So, I shouldn’t have been surprised to find a layer of conflict in the workings of folks who embraced this value. But sometimes I am a damned slow student. Years after I had moved on from that job, I think I finally got this lesson. I got the point while reading up on Henry Kissinger. Thinking of hózhǫ́ as a kind of Realpolitik is of course little more than replacing one metaphor with another, but I continue to think it is a helpful correction to the cosmic muffin concepts that saturated so much of the public discussion of hózhǫ́, at least when the rest of the conversation occurred in English. Even still, the distance between this value and the practices of those who hold it dear is vast, so vast that it seems often to escape the ability of folks to conceptualize the matter.
Which I suppose puts diné on par with the rest of us.
It used to drive me to tears, back during my brief stint as a moderator on the Internet Infidels message boards, when I would see some fellow heathen lecturing a Christian on the virtues of reason and rationality. Okay, this didn’t always bother me, but it drove me nuts in those specific moments when the Christian was doing a damned good job of reasoning about the particular issue and the unbeliever not so much.
Yes, that does happen.
I wouldn’t count myself an Atheist if I didn’t think that ultimately the most reasonable thing to do about gods is to just say ‘no’ to them. But the backing of reason needs to be earned in the details of a discussion, and which side will earn it is back on the table every time you decide to take up the subject. Like it or not, in some conversations about religious matters, it is in fact the believer that is doing a better job of reasoning. That really shouldn’t surprise anyone whose sense of human nature hasn’t been completely overdetermined by their sense of the battle lines in question. Yet in such moments, when the compelling argument just isn’t coming, leave it to the rotten-hearted to simply claim the cultural capital of a free thinking rational person and remind the believer that she isn’t in the club, so to speak.
That is the sort of hypocrisy I suppose I should expect in any camp, including my own, but it doesn’t make seeing it any easier. Take any given value, and you will always see a sort of tension between its motivating characteristics, the oughtness it urges on us, and its currency for those with some claim to that value. Ideally, one could expect those claiming the virtue of reason to be those who actually live up to it, but ideological movements and philosophical orientations also generate a degree of association with a given virtue. And for some, that is enough. They are more rational by virtue of their allegiances, and little else need be said about the matter.
Likewise I will never accept the excuses that conservative Christians make for opposition to homosexuality. It is common enough to hear from folks that their stance on the topic is taken out of love, that they have gay friends, and that they are merely following the word of the Lord on this. (I’ll skip the example of the lady who re-assured me that she had nothing personal against gay people, because she loved Will & Grace. …okay, I didn’t quite skip it, but, well, …I can’t help myself sometimes.) Conservative Christians often cry foul when their position is described as hateful, insisting that we take their own motivations into account.
In my book, you measure goodwill by the way people treat others; and efforts to deprive gay and lesbian folks of the right to marry or adopt, or of security in the workplace, make for a straightforward case of malice. Even without these concrete harms, the high suicide rates for those of homosexual orientation speak to the high costs that some folks pay for the unwarranted stigma placed on certain sexual preferences. Against all this and more, the oft-repeated claim that one can oppose homosexuality while keeping to the admonition to love others starts to ring a bit hollow. The approach taken by conservative Christians against homosexuality makes of ‘love’ a mere footnote, an intellectual exercise in resolving an apparent inconsistency. It falls well short of living up to a virtue which could well be the shining light of Christian faith.
What has me thinking about this is a recent encounter with one of the ways this sort of problem is commonly expressed in ordinary language. I can’t think of any other way to put it, so I will just call it ‘vacuous idealization’. What I mean to get at by coining this monstrous bit of vocabulary is a variety of rhetoric that cancels a value in practice by elevating it to a level of abstraction which is utterly meaningless.
Take for example ‘true love,’ which we are often assured isn’t selfish at all. But that’s not all that true love isn’t. It also isn’t carnal, and it isn’t fleeting. It really isn’t harmful to the one who is loved, and it most certainly isn’t conditional. True love doesn’t keep track of the time, and it doesn’t care how much money you have or how tall you are. True love is timeless, and true love is, …blech! I can’t go on.
By the time we get done with all the things true love isn’t, I can’t help wondering if anything is left in the category at all. And that I suspect is the point of ‘true love’; it is actually an empty set, with no concrete members and no associated concepts to define it. Instead we get the illusion that true love has been defined by taking ordinary instances of perfectly human (and rather flawed) love and negating each of the flaws. We are left to believe that we still know what we are talking about when all of the frailties of human relationships have been tossed in the trash of love that is merely real, as opposed to that which is true, …pardon me, True.
I call Shenanigans!
Real love looks nothing like this True love that people talk about. You notice when she gets in bed without brushing her teeth. And yes, real love hopes that her relatives will take care of her when she needs them. Real love may not care how tall you are, but she’s damned glad you don’t have any really ugly birthmarks. And if real love hasn’t made a point of principle out of your race, your nationality, your political party or your religion, then she certainly does have a way of finding people most when they travel in the same circles she does. Real love comes and goes (dammit anyhow) sometimes without warning and without leaving behind any explanation for her visit, or her departure. And sad to say, real love does have her contingencies, much as we might wish otherwise. Real love always comes with the blemishes, and they do matter, and they don’t go away.
True love is little other than the hope of some ineffable residue left when we’ve taken out all the things that come with Real love in our actual lives. But that is a hope hung on an imaginary hook. If you take away enough of the things that come with real love, you end up with nothing at all. Sadly, I am inclined to think that may be the point of this kind of rhetoric. By stripping out the foibles of real human relationships and the attitudes that go with them, one ends up with a value that is whatever you will make of it. It is something that will never happen, a virtue no-one will ever realize, nor will they ever have to.
And being thus emptied of its meaning, True Love is the perfect predicate for an imaginary subject, to wit, “God is love!”
On a side note, and I will just throw it out there, I do think this is one of the reasons those who emphasize the divinity of Jesus most seem least likely to emulate his actions and teachings. If he is a human, with real human foibles, then the stories told about him offer a real example of how one ought to live. If he is a God, though, well then who could hope to live up to that example?
Yes, I get that this is generally thought to be a paradox in that Jesus is commonly supposed to be both. And yet it is the nature of such an enigma that one can only meaningfully speak of, or think about, one of its axes at any given moment. You can say of a paradox that it is both x and y, but you cannot grasp both at the same time. And of course believers do typically come with a marked preference.
In like manner, I think people often approach issues of objectivity in the most self-defeating manner. It is common enough to speak of a knowing subject and known object when framing different questions about how knowledge works. There is nothing particularly wrong with this, providing one understands the two as part of a relationship of sorts. Once folks start talking about the possibility that a claim could belong entirely to one or the other, the whole model gets rather misleading.
To put it another way, I think we can speak meaningfully about objective features in knowledge, or even of greater or lesser degrees of objectivity, but if objectivity is defined as the total absence of subjective input, well then that is epistemological failure on the horizon. Bringing this a little closer to actual contexts of reasoning, I often hear (or read) commentary in which people compare reasoning with emotion or logic with rhetoric, etc., the implication being that one must choose one over the other. In the popular imagination good reasoning does not appeal to emotion, and rhetoric is always bad.
But of course the point of much good reasoning is rhetorical; it is an attempt to convince someone of something. Far from requiring an absence of emotion, this kind of project is often enhanced by a display of emotion. If you want people to care about something, then you ought to show them that you do too. Fail to do that and watch them doodle as you talk.
The bottom line here is that the quest for objectivity becomes mysticism when it is conceived in terms of purity. If the practice of careful judgement requires an absence of subjectivity, emotion, or conscious efforts at persuasion, then careful judgement resides in a world we have never been and never will be. In fact, we don’t have the faintest idea how to get there, because the very notion is simply nonsense.
On a related note, let us consider the notion of Truth with a capital T. I’ve long since lost track of the number of times I have been told that truth is unattainable, or heard questions such as ‘what is truth’ framed as though it were something ‘out there’, so to speak. Not surprisingly, this approach has the effect of rendering meaningless the mundane truths of daily life. Against the promise of this cosmic Truth, no mere fact could possibly hope to hold our attention. And so the quest for Truth so often becomes an escape from truths.
Countless sophomoric essays have been written about the unattainability of this grand truth …Truth. It sits like the Kantian thing-in-itself well beyond our mere mortal efforts to find it. Many are the ways people have found to explain our failure to find this elusive entity, hiding somewhere in the mountains of philosophical goodness. But the details are unnecessary, because the failure of this quest begins with the framing of the question.
We use the concept of truth (or falsehood) on a daily basis to help us distinguish between claims we agree with and those we don’t. There is a lot of room for disagreement over the nature of that process, and it’s a damned interesting question, but if any theory of truth doesn’t address that sort of process then it is already headed down the wrong path from the outset.
Ultimately, questions about truth are less a matter of discovering a fact in the myriad lands of facts about the world around us than a matter of figuring out what it means to say that something is true (and how that possibility relates to its alternatives). Questions of truth value often involve great concepts and momentous philosophical questions, but they also occur in the context of topics of little importance, some of them being outright dull. I know that I consider it true that the Dr. Pepper I am drinking is too warm and false that the weather is nice outside. (I live in the arctic; what did you expect?) Any theory about the nature of truth that separates it entirely from such mundane matters is less a theory about truth than a hijacking of the notion for some other purpose.
What is Truth?
If you really must go on a quest to discover the answer to this question, then don’t let that quest carry you away from the mundane truths of daily life.
On a related note, and because it fits the pattern, could someone please tell the boys from Chicago what time it is? It is a good song, but seriously, does anyone really know what time it is?
We know what time it is, because time is not a thing to be known independent of human reckoning. If the conventions of human discourse say it is 5:30pm, Alaskan Standard Time, then it is 5:30pm, Alaskan Standard Time.
To make the question more complicated than that is not a quest for something profound; it is a dramatic self-indulgence.
Yes, I’m a lot of fun at parties too.
And with that the rant is nearing its end. If you are still reading this, then you have more patience than I do, and I apologize for tramping through matters both sacred and profane as well as a good many points in between. But of course that is my point, so to speak, that in effect the two extremes may at times prove to be one and the same. When a value becomes too important for anyone even to conceive the possibility of transgressing against it, people remove it from conscious thought in ways that parallel the treatment of things they abhor. Such sacred values can cease to be an effective means of motivating people, precisely because they mean too much to allow for the full range of human possibilities. Worse yet, people sometimes seem to take a value down this road for the very purpose of cancelling its bearing on daily life. Either way my point is that you should be careful about just how much you care about such things, because somewhere past “a lot” lies “Fuhgetaboutit!”
You’d think a sentence like that would have a pretty clear meaning, wouldn’t you? If that whole three-word sentence is a little complex, then surely the single word “like” must convey something pretty simple and obvious.
Unless it doesn’t.
But before I go on to suggest what I mean by that, let’s take a moment to note that that word is creeping (all by itself, even) into more and more of our public discourse. (Discourse? Now there is a word I haven’t used in a while.)
It seems rather innocuous, the little “like” button underneath a Facebook entry, a Youtube video, or a post on WordPress. I can see another one right now up in Stumbleupon bar above the page I’m working on. I’ve long since lost count of the number of discussion forums that make use of similar conventions. Let’s not even get into the whole reddit thing okay! My point is that an awful lot of mass communication these days comes with the invitation to express our approval in terms of an upvote, like button, or some similar device. Ever greater portions of our news and entertainment now come with a prefabricated seal of approval just waiting for us to click yea or nay and thus to make ourselves heard.
…in a really limited way.
But what does our little click of approval mean? What these buttons mean to us and what they might mean to the websites that host them isn’t always clear. Often, the significance seems pretty obvious. You liked what you read, listened to, watched, or otherwise consumed. But sometimes, there is a twist to the content, something that skews the meaning of your approval. If you are reading a news article about a political speech you like, I’ll bet you are happy to give your blessings to both the speech and the article with a single click of a button. But what about a well written piece about a political speech by that fart-for-brains bastard you can’t wait to vote against? Well, then the ‘like’ button only applies to the article itself, right? …or do you refrain from clicking the ‘like’ button at all in cases like that? We don’t have a button that helps us to distinguish content from style or subject matter from the simple decision to call our attention to it.
And I’m sure most of us are familiar with the dilemma posed by a friend describing on Facebook something awful they’ve just experienced. Suddenly the like button just isn’t quite the tiny gesture of personal support that it has been for the last hundred or so mind-numbing left clicks we’ve executed while watching bad TV or not-quite-reading the memos at work. So, you sit there for a moment and think about it before telling yourself you better actually write something this time. And since it’s significant and personal, you’re going to have to think about it and choose your words carefully. …dammit!
But perhaps there is a sob-story in this too.
I have 5 minutes of time to kill, and I want to enjoy it by reading funny stories from my friends and thanking them for it with a simple click of a button. I’m even happy to cheer folks on when they tell me good things about their lives. But now one of my close friends has just experienced a major tragedy, and she snuck a note about it into this stream of otherwise happy-and-light fluff I am using for my entertainment. Now I feel obligated to say something meaningful, and I’m really not ready to get all emotional, and fuck I only have 2 minutes left before I have to do something anyway, and I have no idea what to say. Fuck!
Presumably this sort of Facebook entry would create a similar tragedy for anyone with enough heart to know just how frustrating that kind of moment can be. The social niceties of liketry can be very complex. We need a button that says; “I don’t really like what you’ve just described but I like you and want you to know that I support you in your struggles, …at least enough to press a button about it.”
On WordPress at least, hitting the “Like” button is a little akin to saying “Hey baby!” It is often a way of telling someone you exist and inviting them back to your apartment. Whatever else the ‘like’ button means around here, it is also a potential means of hinting that someone should come visit your own blog, where of course you hope they will read and like your own material. …which is what you will likely presume when you see that someone has hit the like button underneath your own article.
…unless it means that they just want you to come back and read their new post.
The possibilities of mutually re-enforced self-deception here are astounding! Sometimes I think it is entirely possible that nobody is reading anybody’s work anymore, online or otherwise, or even looking at the pictures. Could WordPress be a community of illiterate button-pushers, liking each other in one great big orgy of self-referential liketude? …with nary a word ever making its way into a single skull!
I can’t think about it anymore; that way lies madness!
I suppose the fact that giving gestures of approval may be a means of getting them back didn’t exactly begin with the internet, but sites like WordPress have certainly re-arranged the economics of liketry in new and interesting ways.
By ‘interesting’ I probably mean ‘just a little sickening.’ …yeah.
I recently got a bit of an object-lesson in what it can mean to ‘like’ something on Stumbleupon. You see, when I first started using that service, I wasn’t entirely sure how I wanted to set my standards for liking a webpage. Was it enough if I liked something a little? Or did I want to be a hard-sell and like only the very best of the very best? It really didn’t take long before I realized that there are real advantages to liking more pages (more followers being chief among them), so I loosened up a bit, but I still insist on somehow keeping a trace of sincerity to the whole thing. I don’t ‘like’ things that I don’t actually like.
For the first month or so I followed my usual approach of restricting approval to those things about which I could voice clear and unmitigated approval. I ‘liked’ only those things which I really did like, completely and unreservedly, from the bottom of my soul, …or at least my liver. I held back from approving many thoughtful articles on a range of interesting subjects because I had a problem with something in the third paragraph of this one or the specific language used in expressing a minor point in that one. Pictures, on the other hand? Well, I found quite a few of them to be like-worthy, not least because I’m not a photographer. I wouldn’t know how to pick at them if I wanted to, …well not that much anyhow. The point is that I liked a lot of pictures.
Of course the thing about Stumbleupon is that the site shows you more of what you like and less of what you don’t as you establish the difference by clicking those buttons. So, I suppose I should not have been surprised that thoughtful articles on religion and politics answered the call of the stumble button with ever decreasing regularity, or that they had been replaced with images of kittens, sunsets, and street art. The more time I spent on Stumbleupon, the less useful information I got from it.
I figured this out when I heard a strange and stupid voice saying: “This site is useless for anything but lolcats!” The voice was of course my own. A moment later, I think I called myself an idiot.
At least I should have.
Because of course I had been telling the Stumbleupon site to supply me with frivolous content all along. Every time I hit the ‘like’ button I was effectively saying “more of these please.” And since I was only saying that when I looked at things about which I had few serious concerns, I was pretty much telling the software demons at Stumbleupon to keep it light and fluffy when they chose my content.
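The feedback loop I’m describing can be sketched as a toy simulation. To be clear, this is purely illustrative; the category names and numbers are mine, and this is not Stumbleupon’s actual algorithm, just the general shape of the trap:

```python
import random

# Toy sketch of a like-driven recommender: it serves categories in
# proportion to past likes. A user who gives unreserved approval only
# to the light, fluffy stuff trains it to serve little else.
# (Hypothetical categories; not Stumbleupon's real system.)

random.seed(0)

likes = {"essays": 1, "kittens": 1, "sunsets": 1}  # start out neutral

def recommend(likes):
    # Sample a category weighted by accumulated likes.
    cats, weights = zip(*likes.items())
    return random.choices(cats, weights=weights)[0]

def user_likes(category):
    # The picky user withholds approval from essays (too easy to pick at)
    # but happily likes the pictures.
    return category != "essays"

for _ in range(500):
    category = recommend(likes)
    if user_likes(category):
        likes[category] += 1

# The essays never gain weight, so they all but vanish from the stream.
share_fluff = (likes["kittens"] + likes["sunsets"]) / sum(likes.values())
print(likes, round(share_fluff, 2))
```

The point of the sketch is just that the skew is self-reinforcing: every withheld like on a serious article makes the next serious article less likely to appear at all.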
Once I figured this out, the remedy seemed rather obvious. If I wanted to see more interesting material, I was going to have to give a pass to the next pretty scene and (more to the point) swallow at least some of my reservations long enough to say ‘yes’ to an opinion piece or three. I made the adjustment, and today I am finding the material I get from Stumbleupon far more interesting than I did at the end of my first month on the site. I simply had to stop thinking of the ‘Like’ button as a sign of ultimate approval and start thinking of it as a sign of general interest, or even an outright request for more of the same sort of content.
Of course that wasn’t the end of my adjustments. I find that my likes page at Stumbleupon includes articles I really don’t agree with at all, but which I might want to read again, anyway, or to reference for purposes of one of my classes. Somewhere along the line it dawned on me that I could use my Stumble account as a kind of cache for anything of interest to me in any way. So, the ‘like’ button on Stumbleupon no longer means that I actually approve of the content at all; it means that I am interested in reading it again. Sometimes it means I dislike the content of an article enough to want to come back to it, …probably to pick a fight of some kind over the matter.
So, I guess I do ‘like’ something that I don’t like, which is a fact that I don’t like at all.
…I need a drink.
Lest you think this ramble is entirely about the trivialities of internet liketry (okay, yes, it is), I should say that the whole Stumble incident has me rethinking my overall philosophy of likalism. I’ve always been reluctant to place my stamp of approval on most anything in life, and I can’t help thinking this business showed me something interesting about the mental landscape that produces this pattern. Perhaps I’m a little too prone to hold a flaw or two against the overall value of an otherwise interesting work. Would it be wiser to think of my approval less as a pass on the problems and more as a sign of interest?
Then again, I’m not much sure if I like what I’ve just written. I mean what the Hell? Somewhere in here I touch on some really interesting questions (or so I thought) about how the net skews our sense of meaning and commodifies approval, …and then I end up with this quasi-self-help lesson. I hate self-help lessons! Seriously, how the Hell did I veer so far off the path on this one!?!
More to the point at hand, the term ‘religion’ seems to be an awfully bad fit for a lot of the things it is commonly used to describe.
When I was teaching on the Navajo Nation, I used to illustrate this by asking my students: when you hold a healing ceremony, who comes? The answer was always something to the effect of the community itself, friends, relatives, etc. What happens if you don’t believe in the effectiveness of the ceremony? Frankly, I don’t think the question came up very often, at least not in the context of deciding who belonged at the ceremony, but I did once meet a woman who had effectively answered it. A born-again Christian, she stayed at the main house during the chants and entered the Hogan to help serve food during the breaks. She thus met her family obligations without implicating herself in a ceremony that was anathema to her own beliefs.
When I asked my students who goes to a church, the answer was invariably something along the lines of its members, believers, etc. Catholics go to a Catholic Church, Baptists to a Baptist Church, and so on. Of course this doesn’t mean that others aren’t welcome at a given church, but there is a distinct sense that the church exists for those who adhere to its doctrines. Those testing the waters will be expected to make a choice at some time.
Which brings me to another point: a religion can be modeled as a debate stance. Who belongs to a church? In many cases, we can literally trot out a range of statements and ask people whether or not they will vouch for the truth of those claims. “God exists.” “Jesus rose from the dead.” You get the idea. Say ‘yes’ to the right statements, affirm that they are true, and you are in the club. Say no, and you are out. Whatever else is happening here, it is a process of segregating folks according to an imagined argument within a larger community.
When I used to post on christianforums.com (CF), this was explicit policy for many years. Those who affirmed the Nicene Creed (or perhaps the Apostle’s Creed) could count themselves as Christian and post in the Christians-only sections. Those of us who could not were asked to restrict our posts to the open-debate areas. The policy varied in its details from time to time, and as I recall it changed rather dramatically a few years back, but when I was there, at least, CF policy fit the model I am proposing: membership in the faith, as it was defined on CF, could be determined by one’s willingness to back a series of truth-claims.
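The debate-stance model is simple enough to sketch in a few lines. This is a toy illustration only; the creed items and names below are my own shorthand, not CF’s actual system:

```python
# Toy illustration of religion-as-debate-stance: membership hinges on
# vouching for a fixed set of truth-claims. (Hypothetical claims and
# names; my sketch of the model, not christianforums.com's real code.)
CREED = {"God exists", "Jesus rose from the dead"}

def in_the_club(affirmed_claims):
    # 'In' only if every creedal claim is affirmed; affirming some
    # but not all is not enough.
    return CREED.issubset(affirmed_claims)

print(in_the_club({"God exists"}))
print(in_the_club({"God exists", "Jesus rose from the dead"}))
```

The design choice worth noticing is that membership here is a pure function of assent to propositions; community ties, participation, and family obligation never enter into it, which is exactly the contrast with the ceremonial systems described above.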
So, what is the difference?
I’m about to paint it in pretty broad strokes, but I’ll warrant the paint gets more or less within the proper lines.
A religion is defined in terms of beliefs, which consist of the willingness to vouch for the truth of a claim. A native ceremonial system is defined in terms of community membership and participation. Of course there is considerable overlap between the two. People express a number of beliefs connected with Navajo ceremonies, and churches can be remarkable community institutions. But as with any other questions of value, it is the priorities that count. Failure to vouch for essential doctrine gets you out of a church. It doesn’t get you out of a Navajo ceremony, at least it didn’t when I was there.
So, what is going on here? I would suggest that the point of the ceremony is at least partly to unite the community, to get them all involved in something of great importance to the community at large (the health of its members in the Navajo case). What is the point of the religion? Well, it is at least partly to distinguish a select membership from some larger community. A religion isn’t simply about what group you belong to; it is about what separates you from those others. What a native ceremonial system unites, a religion divides.
Some might find that shocking, or at least counter-intuitive. Often when religious debates get rather heated, someone will lament the divisiveness of the issue and give a variant of the “can’t we all just get along” speech. The sentiments are noble enough, but I often wonder how many times people can see the process of division before it sinks in: that is what is SUPPOSED to happen.
Rainbow Bridge (Sacred Site)
Of course both ceremonial systems and religions unite as well as divide, but they do so along different parameters. The ceremonial system unites people along the lines of an established community; it gives people who share in a range of political and economic interactions a means of emphasizing their connections. A religion carves off a notch of those people and sets them in ideological opposition to others in their community.
So, this is my particular take on a running theme in Native American studies, the unfitness of “religion” to the understanding of Native American practices commonly described using precisely that term. The problem was particularly critical to the workings of a federal law passed in 1978, the American Indian Religious Freedom Act, which I happened to study for a bit. The law had a rocky history from the start, and at least in the early 90s (when I studied the matter) an awful lot of people were disappointed in its application to real life.
It was easy enough to say that various indigenous practices raised a lot of First Amendment issues. (Well, at least it was in 1978; the prior history of willful abuse is dismal, and a topic for another post.) But actually extending Free Exercise protections to Native American “religious” practices proved very difficult. How do you protect the right to prayer when that might mean a lot more than a moment of silence or even a few words spoken in a certain posture? What do you do about ritual paraphernalia at border crossings? How about odd dress in schools or prisons? How do you deal with strange substances? Never mind peyote; a simple smudge-pot can really screw up a paradigm! …and (this was the real sticking point) what do you do about access to sacred sites on public lands, especially sites that might not be so sacred anymore if someone builds a road or a fast food restaurant in the vicinity?
See, the problem was that native “religious” practices simply didn’t fit into the niche already carved out for religions within the American political economy. So, time and again, when Native Americans sought to enjoy their religious freedom, they found some official or judge who couldn’t (or wouldn’t) grant that protection. The necessary relief always seemed to be too much to ask, and the resulting case-law was dismal to say the least.
So, what was the problem? At least some folks figured it lay with the key term “religion.” It just didn’t fit. The practices in question may have included enough of what people call ‘religion’ to get the issue on the table, but they weren’t restricted to quite the semantic domain one normally expects of things described using that term. The contents of Native American “religions” thus tended to spill over into other social terrain. Where western religions had learned to reside in the spaces between other public matters, their Native American analogs didn’t even come close.
So, if the term “religion” doesn’t fit, what does?
It really is difficult to answer that question. We can of course use the term “religion” anyway, but the warrant for its use is analogical, and my point is the analogy breaks down, often in really inconvenient ways. A common practice is to talk about native “spirituality,” but the chief benefits of “spirituality” seem to be that the term means just about anything you want it to mean, which is not an argument in its favor.
My own solution is to focus on the ceremonial practices. As the community-building functions of those ceremonies take priority over the argument-framing functions, those practices naturally stretch into social interactions well beyond those of religions. Of course this way of talking about the issue involves a judgement about priorities; it is a claim about what matters most. So, I won’t be too offended if someone opts to go another route.
Yes, I will. Let’s fight about it!
Anyway, what interests me about this is that it is the other side of a coin to my own situation when it comes to the subject. Religion obviously doesn’t do much for me, and as my last post ought to have established, I obviously think there is something about religion that is NOT part of my life and thinking. What that is, is another question, and admittedly a satirical post isn’t really going to nail it down. So, I am trying to think my way through that issue (for the umpteenth time) by looking at people who may have a similar problem.
…and by “similar” I probably mean “opposite.”
If I as an atheist lack something falling under the heading of ‘religion’, the people I am talking about seem to have a surplus of it. Where the term denotes something I don’t want in my life, it denotes something that falls well short of what they want in their own lives. Where use of the word “religion” commits me to too much, it commits them to too little.
Either way, we have a problem.
The Hogan picture comes from the website, Virtual Tourist. It is part of the Navajo Museum and Visitor’s center in Window Rock, AZ. The sandpainting is from navajopeople.org which includes a nice description of its symbolism and ritual significance. The picture of Rainbow Bridge comes from Destination360. It was the subject of sacred site litigation in Badoni v. Higginson, one of many sacred sites litigated in the 70s and 80s.
Okay, so we just started a section on slavery and the Civil War in my American history class. One thing that always irritates me here, or maybe it just amuses me, I don’t know… Anyway, I think about it whenever I cover this subject. Every textbook I have ever used on American history explains that California was admitted as a free state under the terms of the Compromise of 1850.
So, what’s the problem?
The problem is a little-known law passed in California that very year, ostensibly for the protection of Indians. The law imposed a $50.00 fine on anyone forcing an Indian to work against his will. So, that should be good, right?
The law also contains the following provisions:
When an Indian is convicted of an offence before a Justice of the Peace punishable by a fine, any white person may, by consent of the Justice, give bond for said Indian, conditioned for the payment of said fine and costs, and in such case the Indian shall be compelled to work for the person so bailing, until he has discharged or cancelled the fine assessed against him…
Any Indian able to work and support himself in some honest calling, not having wherewithal to maintain himself, who shall be found loitering and strolling about, or frequenting public places where liquors are sold, begging, or leading an immoral and profligate course of life, shall be liable to be arrested on the complaint of any resident citizen of the county, and brought before any Justice of the Peace of the proper county, Mayor or Recorder of any incorporated town or city, who shall examine said accused Indian, and hear the testimony in relation thereto, and if said Justice, Mayor or Recorder shall be satisfied that he is a vagrant, as above set forth, he shall make out a warrant under his hand and seal, authorizing and requiring the officer having him in charge or custody, to hire out such vagrant within twenty four hours to the best bidder, by public notice given as he shall direct, for the highest price that can be had, for any term not exceeding four months; and such vagrant shall be subject to and governed by the provisions of this Act, regulating guardians and minors, during the time for which he has been so hired.
Oh there is a lot more to the act, and plenty of reassuring clauses that appear to keep people from exploiting natives, but it should not take a lot of imagination to read between the lines here and see how this story actually went down. To say that this law opened up the native labor-market to exploitation would be putting it mildly. …too mildly.
In essence, the law made it illegal to enslave an Indian, at least on one’s own initiative, but if someone was caught being an Indian on a city street, the city could bond him over to you for a price. Oh yes, folks would have to go through the trouble of slighting the moral integrity of the Indian first, but how difficult do you think it would be to find a white guy willing to do that?
It’s not the most efficient form of slavery one could devise, but it is slavery nonetheless, and that is why it always bugs me to see textbook after textbook announce that California was admitted to the Union as a free state under the terms of the Compromise of 1850.
…in the very year they created a legal procedure for enslaving Indians.
Oh I get it; this kind of issue simply falls outside the scope of the narrative in question. It was not even on the horizons of those debating the major issues of the day in Congress. So, if one is recounting the events leading up to the Civil War, then this piece of information does not really change that story much. Neither does the existence of a viable slave-trade in the interior Southwest. If one is focused on the question of slavery as it was framed in the national politics of the day, then yes, California was certainly admitted as a free state.
Or is that the problem, the terms of that debate?
The bottom line is that ‘slavery’ is just a word, and you can choose to use it or not as easily as you can any other term regardless of the realities of the labor conditions in question. So, historians can skate right past these instances of captive labor (much as the great figures of the era did in their own approach to the issue) while focusing on the institutional forms of slavery that were the main issues of the day. But of course that same sleight of hand is necessary to cap off the story of the Civil War in the standard way, describing it as bringing about the end of slavery in America.
To give closure to the issue of slavery in our national storyline, one has to ignore the use of debt-peonage in conjunction with Jim Crow laws, or at least classify them as a whole new kind of problem. Using the word “slavery” in the chapters leading up to the Civil War and dropping it afterwards creates the illusion that the new social problems are significantly different from the old ones. This approach suggests that the problems associated with slavery were somehow resolved with the closing chapters of Reconstruction, perhaps not to the satisfaction of all concerned, but resolved nonetheless. And Jim Crow then becomes a whole different kind of problem, as do a host of similar practices.
Just like the California Law for the protection of the Indian.
Note: The law can be found in the California Statutes from 1850. It is also included in the primary documents for the following textbook:
Albert L. Hurtado, Peter Iverson. Major Problems in American Indian History: Documents and Essays. Second Edition. (Houghton Mifflin, 2001).