The Usual is Suspect


As I wrote a few days ago, it's the season to be hearing about whether or not our proposals for our field's flagship conference (CCCC) have been accepted or rejected. Jeff and Jenny submitted a panel that wasn't accepted, and there's been a little grousing about the rejection. Like Jeff, I've had very good luck with proposals--I've only been rejected twice, I think, out of 11 or 12 proposals. And yet, every year, I know of very good people whose interesting panels don't get accepted. I know that this is probably the case for most people--we all think that our friends are the ones getting overlooked unfairly--but I thought I might take a crack at explaining just what I find suspect in the whole CCCC process.

I'm not a disciplinary historian, but I know a few things. CCCC has grown substantially in the past 20 or so years, and as a result, certain steps have been taken to ensure fairness in the selection process, namely:


  • "No Multiple Submissions" - thousands of people submit proposals, and restricting them to a single proposal gives each person an equal shot at acceptance

  • Blind review - again with the equality. Insofar as a review process can ever be blind, proposals are ranked on merit rather than brand name recognition

So far, so good, or okay at least. Both of these measures have defensible rationales. What I find a lot less defensible, however, is the opaque process by which the reviewing takes place. Each year, the process is overseen by a new person (the CCCC Chair), and that person is responsible for assembling the team of reviewers who make decisions about acceptance or rejection. As far as I can tell, though, there are three major ways of meeting that responsibility, i.e., choosing reviewers:

  • Knowing someone who has expertise in the necessary area.

  • Knowing someone who knows someone who has expertise in the necessary area.

  • Finding someone who reviewed in that area in a previous year and asking that person to repeat.

The problem here is that, in each case, reviewers are ultimately chosen because they know someone. I don't dispute their qualifications, but I do dispute the idea that this process results in a team of reviewers that is representative of the field. More likely, it represents a given Chair's socio-professional network, and it rewards those people who "know someone." The system we've got carries a great deal of "insider inertia," and inevitably, that inertia is reflected in the program each year. I haven't done the research yet, but it wouldn't surprise me at all to learn that certain graduate programs are disproportionately represented among the reviewers. I know for a fact that certain schools are underrepresented, believe me. And this has an effect on the kinds of scholarship that are more or less likely to be accepted.

Whether this effect can be demonstrated or not, I don't know, because the materials necessary to do so are not made available for research. I know how I'd do it, though. More to the point here is that the inside/outside quality of the review network means that the "fairness" they've achieved in the process is more limited than most people think. Reviewers may be "blind" to individual proposers' names, but they are not blind to the rhetoric of the proposals themselves, which is shaped by the training those proposers have received. As a result, I've ended up doing well by retraining myself to write CCCC proposals in a certain way, by not appearing to come from a particular school. I've learned to perform my proposals in a way that's proven pretty successful.

It's idealistic, no question, to say that I shouldn't have to do that. But there are ways that this situation could be made fairer. First, the process itself could be more transparent. I work to make it as transparent for our graduate students as I can, but there are certain policies that work against me here. Second, the selection process should be opened up to the membership--at the very least, as someone with 10 years of experience in computers and writing, regular participation at the conference, and fairly frequent publications, I should have the option of reviewing proposals, an option I won't have until I "know someone" who's Chair. And that's not right. There are lots of people who have the expertise but lack the connections, and those people (myself included) are shut out of the process. No mechanism exists by which we might volunteer to be reviewers. It would cost very little to assemble a database, a qualified pool of potential reviewers from which Chairs might draw each year for given topics. It would ultimately be their choice, of course, but the rationale for those choices should be available for scrutiny, and it should be based on more than acquaintance.
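To be clear about how modest a piece of infrastructure I'm imagining: something like the sketch below would do. It's a hypothetical illustration only--the table layout, names, and area labels are mine, not anything CCCC actually maintains--but it shows how a self-nominated pool could be filtered by topic rather than by acquaintance.

```python
import sqlite3

# A hypothetical sketch of a volunteer-reviewer pool; nothing here reflects
# an actual CCCC system. Volunteers nominate themselves and list their areas,
# and a Chair (or program committee) pulls candidates by topic.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reviewers (
        name TEXT,
        institution TEXT,
        area TEXT,               -- self-identified area of expertise
        years_reviewed INTEGER   -- prior years of reviewing, if any
    )
""")
conn.executemany(
    "INSERT INTO reviewers VALUES (?, ?, ?, ?)",
    [
        ("A. Volunteer", "State U", "computers and writing", 0),
        ("B. Volunteer", "Private U", "basic writing", 2),
        ("C. Volunteer", "Community College", "computers and writing", 1),
    ],
)

# Instead of asking "who do I know?", the question becomes "who volunteered
# in this area?"--and the list itself documents the rationale for the choice.
candidates = conn.execute(
    "SELECT name, institution FROM reviewers WHERE area = ? ORDER BY name",
    ("computers and writing",),
).fetchall()
print(candidates)
```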

I'm not criticizing specific Chairs here, but rather the system. Now that CCCC has grown to its current size, the fact of the matter is that no single person can know the entire field. Reviewers are shortcuts in this regard. Without any kind of formal process, though, those shortcuts are based on whatever information a given Chair happens to have. So why not make that information genuinely the best available? Rather than assuming that a Chair becomes omniscient upon election, provide that person with the data from which s/he can make informed selections. There was a time when the current process made sense, when it was possible for a Chair to "know" the field well enough to select reviewers. That time, however, is past--and it's time our processes caught up with our disciplinary realities.

2 Comments

C,
I really think the "no multiple submissions" thing is a big problem for 4Cs. You seem to disagree here. How's come? The whole process seems rather faulty to me, though I'm not sure exactly how to make it better. All I know--as I said in my post--is that NCA doesn't seem to have the same problems, though they're just as big as "we" are.

Multiple submissions only increase your odds if a significant number of people don't avail themselves of that option. And then your odds decrease if you allow one person to hold multiple speaking roles.

Say you've got 1000 slots and 2000 proposals. Right now, that means 1000 people, 50%, get accepted. If we allowed multiple submissions, then perhaps you have 2000 people submitting 3000 proposals, only a third of which get accepted. If you've submitted 2, then maybe your chances go up to 2/3. But that assumes that there's still one slot per person. If you allow one person to hold multiple slots, then the odds slide back down, even with multiple submissions, and the people who only submit one proposal are in even worse shape.
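For what it's worth, here's a minimal sketch of that arithmetic in Python, using the same made-up numbers and assuming--unrealistically--that each proposal is accepted independently at the overall rate. Under that assumption, the two-proposal figure actually comes out a bit under 2/3.

```python
# Back-of-the-envelope sketch using the toy numbers above: 1000 slots,
# 2000 proposals now vs. a hypothetical 3000 proposals if multiple
# submissions were allowed. Assumes each proposal is accepted
# independently at the overall per-proposal rate, which is a
# simplification, not how reviewing actually works.

slots = 1000              # available speaking slots (made-up)
proposals_now = 2000      # one proposal per person today
proposals_multi = 3000    # guess at volume with multiple submissions

rate_now = slots / proposals_now       # 0.50 -- current odds per person
rate_multi = slots / proposals_multi   # ~0.33 -- per-proposal odds with more volume

# Chance that someone who submits two proposals gets at least one accepted.
at_least_one = 1 - (1 - rate_multi) ** 2

print(f"current acceptance rate: {rate_now:.0%}")                       # 50%
print(f"per-proposal rate with multiple submissions: {rate_multi:.0%}") # 33%
print(f"two proposals, at least one accepted: {at_least_one:.0%}")      # ~56%
# Better than 33%, but only a modest gain over today's 50% -- and that's
# before you let one person hold multiple slots, which pushes the odds for
# single-submitters down further still.
```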

Of course, my numbers are all fake here, but I don't really see the problem that multiple submissions would solve. Honestly, I don't know how NCA manages the logistics of juggling so many speakers with multiple commitments--I've got to think that it's an even bigger planning nightmare than CCCC.

cgb
