Caveat Emptor

The opinions expressed on this page are mine alone. Any similarities to the views of my employer are completely coincidental.

Thursday 2 October 2014

How sociologists make themselves absurd

I'm struggling to articulate a half-formed thought in this post, so you may prefer to skip it and wait until I manage to state it more sharply.

I was struck yesterday, when skimming the replies to Lucas and Szatrowski's (LS) article on QCA, by the unwarranted assumptions being made about what LS believe about quantification in empirical social science. It reminded me a little of some aspects of my exchange of views last year with David Byrne (here, here and here), though I'll readily admit that the intellectual quality of our "debate" was not nearly so elevated as the one that LS are involved in. I also see a connection with some of the highly polemical pieces on the use of statistics in the social sciences written by Stephen Gorard (see for instance here and here).

The common thread is something like this: the protagonists are not necessarily anti-quantitative, but they seize on some aspect of poor practice that is widespread in the journals - inappropriate applications of RCTs, unbelievable instrumental variables, misunderstandings of the meaning of confidence intervals, silly conclusions drawn from null-hypothesis testing, giant fishing expeditions, enormous variable races, fatuous assumptions about causality. Then, instead of drawing the conclusion that students and researchers need to be better trained, referees better informed and editors more sophisticated in their appreciation of what applied statistics can and can't do, they infer that the problem lies not with the users, nor with the misapplication of the standard tools, but with the tools themselves, which must be discarded and replaced with something else.

Thus we get remarks like:

 "This paper confirms that confidence intervals are not a generally useful measure or estimate of anything in practice." Gorard

"The conventional quantitative programme in the social sciences has told us very little of real interest." Byrne

So the baby gets chucked out with the dirty bath water and the innocent are encouraged to reject the conventional tools before they have even had an opportunity to learn what they are good for.

So my puzzle is this: why do apparently intelligent people choose to espouse such extreme views?

Let's set aside one possible explanation - that they are blathering about things they don't really understand. Maybe it's true, maybe it isn't: I'm not going to go there.

Another possible explanation is the incentive structure of social science publishing. All the incentives point in the direction of big bold statements that catch the eye, contradict established positions and carry a clear, novel take-home message. Recommendations for cautious, incremental changes and improvements tend to be ignored in favour of the shock of the new. Thus an article arguing that quite a few people don't really understand what confidence intervals are good for is less attractive to an editor than one arguing that confidence intervals are completely useless. If you want to catch the eye in sociology, waving a banner declaring 'Revolution Now' is more likely to help you build a career than waving one that says 'Let's try to improve things a little bit'.

8 comments:

pe51ter said...

Last night I read "The widespread abuse of statistics by researchers: what is the problem and what is the ethical way forward?" by Stephen Gorard. This is one of the papers that you refer to in your post.

In the first part of the paper Gorard argues that statistical methods (i.e. significance tests and confidence intervals) should not be used when dealing with a whole population or a biased sample. I agree with this point of view, although some statisticians argue that it is OK to use statistical methods on a population when individual observations are subject to measurement error (as opposed to sampling error).

In the second part of the paper Gorard argues that statistical methods are useless for true (unbiased) samples. This is nonsense. He is simply demonstrating that he is very confused. He also uses a tactic that I have seen once or twice before: he argues that two different populations could give identical samples, which would lead to identical significance tests and identical confidence intervals. So he is trying to demolish an entire body of well-established theory (a theory that deals precisely with uncertainty) by dreaming up a very, very, very unlikely scenario.

Gorard's reasoning for confidence intervals being useless is different from his reasoning for significance tests being useless. He doesn't seem to realise that the mathematical basis for both is identical, and hence that they stand or fall together: a 95% confidence interval contains exactly those parameter values that the corresponding two-sided test would fail to reject at the 5% level.
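
To make the duality concrete, here is a quick sketch in Python (my own illustration on made-up data, using scipy; nothing here comes from Gorard's papers):

    # The duality between a two-sided one-sample t-test and the t confidence
    # interval: the 95% CI contains exactly those null values mu0 that the
    # test fails to reject at alpha = 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(loc=10, scale=2, size=30)   # made-up sample

    m, se = x.mean(), stats.sem(x)
    lo, hi = stats.t.interval(0.95, df=len(x) - 1, loc=m, scale=se)

    for mu0 in (lo - 0.1, lo + 0.1, m, hi - 0.1, hi + 0.1):
        p = stats.ttest_1samp(x, popmean=mu0).pvalue
        print(f"mu0={mu0:6.2f}  p={p:.3f}  rejected={p < 0.05}  "
              f"inside CI={lo <= mu0 <= hi}")
        # 'rejected' is True exactly when mu0 falls outside the interval

Reject one and you must reject the other: they are the same calculation viewed from two directions.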

Gorard keeps saying that a null hypothesis and a random sample tell us nothing about the rest of the population that has not been sampled. If this were true, mathematical statistics would not exist. A sample can tell us a great deal about the population from which it was drawn. To use his own example: a sample of three red and seven blue balls from a bag containing 100 balls tells us that the blue:red ratio in the bag is much more likely to be 7:3 than 3:7. But Gorard argues that a true population ratio of 7:3 and one of 3:7 could both give identical samples, and that such a sample could therefore not help us distinguish between the two hypotheses. In fact, when the true ratio is 7:3 the probability of a sample ratio of 7:3 is 0.281 and the probability of a sample ratio of 3:7 is 0.006, so a sample ratio of 7:3 tells us that the true population ratio is much more likely to be in the region of 7:3 than in the region of 3:7.
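
Anyone can verify those figures in a couple of lines of Python (my own check, using scipy's hypergeometric distribution; I'm assuming the 10 balls are drawn without replacement from the bag of 100, which is what reproduces the numbers above):

    # Sample of 10 from a bag of 100 balls that is truly 70 blue : 30 red.
    from scipy.stats import hypergeom

    N, n, blue = 100, 10, 70    # bag size, number of draws, blue balls in bag

    print(hypergeom.pmf(7, N, blue, n))   # P(7 blue, 3 red) ~ 0.281
    print(hypergeom.pmf(3, N, blue, n))   # P(3 blue, 7 red) ~ 0.006

A likelihood ratio of more than forty to one is hardly "nothing".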

Colin said...

I agree. There is such a great fog of confusion in both the Gorard pieces that it is difficult to know where to start putting it right.

To give another example from Gorard - something that is intended to help students learn quantitative methods:

"It would be incorrect to use CIs in any situation where their underlying assumptions were not met. Any deviation from normality in the achieved sample data would mean that the mathematical basis for calculating a CI no longer applied. Thus, in the vast majority of social science situations, where data does not describe a perfect or even near-perfect normal distribution, CIs would be misleading and should be avoided. "

http://www.evaluationdesign.co.uk/resources/qm-6-confidence-intervals/

Er... no. The normality of the achieved sample data is irrelevant. If the population distribution is non-normal then one would hope that the sample distribution would reflect that. As long as n is sufficiently large, the central limit theorem tells us that the sampling distribution of the sample mean (which is what we care about for estimation purposes) will be approximately normal - approximately here meaning good enough for all practical purposes.
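
You can see this in a thirty-second simulation (my own Python sketch; the exponential population is just a stand-in for 'heavily non-normal data'):

    # Draw 10,000 samples of size 50 from a strongly skewed population and
    # look at the distribution of the sample means.
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(0)
    samples = rng.exponential(scale=1.0, size=(10_000, 50))
    means = samples.mean(axis=1)

    print(skew(samples.ravel()))   # ~2.0: the raw data are very non-normal
    print(skew(means))             # ~0.3: the sample means are already
                                   #       close to normal at n = 50

The raw data are as skewed as you like, yet the distribution of the sample means - which is what a CI is actually built from - is close to normal all the same.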

If an "expert" can be so wrong about this, which is just a matter of fact, it doesn't inspire confidence in things that are more a matter of judgment.

What is astonishing is that in UK social science the sort of people who routinely make mistakes of this sort (and there are very many of them) are regarded in some apparently respectable universities as experts on methodology, serve on ESRC panels, are awarded ESRC grants for methodological research, referee journal articles and research grant applications, and are generally invited to tell other people how to go about doing their research.

Something is really rotten here & it is a bit of a scandal that this situation has been allowed to continue for so long.

pe51ter said...

I have now read "Confidence intervals, missing data and imputation: a salutary illustration" by Stephen Gorard. It is truly awful. The problem seems to lie with Gorard's poor understanding of basic statistical concepts, and all of his nonsense deductions follow from there. For example, he seems to think that in order to compute a confidence interval we assume that the sample mean is equal to the population mean. This is not true. A population mean is a parameter with a fixed but unknown value. In classical statistics we can make assumptions about unknown parameters, but we can't make probability statements about them. In contrast, sample values and sample means are (hopefully random) variables, so we can make probability statements about them but we can't make assumptions about them.
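
The classical reading is that the interval, not the parameter, is the random object: over repeated samples, about 95% of the intervals constructed this way will cover the fixed population mean. A short simulation makes the point (again my own Python sketch, not anything from Gorard's paper):

    # Coverage of the 95% t interval over repeated sampling from a
    # population whose mean is fixed at mu = 5 (unknown in practice).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    mu, n, reps = 5.0, 25, 10_000

    covered = 0
    for _ in range(reps):
        x = rng.normal(loc=mu, scale=3.0, size=n)
        lo, hi = stats.t.interval(0.95, df=n - 1,
                                  loc=x.mean(), scale=stats.sem(x))
        covered += lo <= mu <= hi

    print(covered / reps)   # ~0.95: the probability attaches to the
                            # procedure, not to the parameter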

I have taken the trouble to register with the "International Journal of Research in Education Methodology" and I'm tempted to submit a critique of Gorard's paper.

Colin said...

Indeed. IJREM seems to be one of those grey-area journals on the borderline of legitimacy, one step up from vanity publication. I had never heard of it, so I looked into it a bit. It's clear that it is not edited in the US, as it appears to claim, but in India, and that it is part of a stable of titles all offering "peer reviewed" publication for a fee (though actually quite a small fee).
It's also obvious that whatever peer review goes on takes a very literal view of who counts as a peer. Clearly, in their minds, the appropriate peers of people who write articles about subjects they don't understand are other people who don't understand the subject!
I doubt it's worthwhile pursuing Gorard in IJREM, as its readership is likely to be very limited.

Didier said...

I'm quite divided about this. On the one hand, what if it doesn't really matter what's in these papers? Should we even bother? After all, if they get any attention at all, aren't these papers simply preaching to the converted? They might provide a person who wasn't going to use quantitative methods anyway with a warm feeling. On the other hand, it's quite worrying. Isn't this part of what lures young academics into non-science (that may look like science)? It sadly reminds me of a professor criticizing my use of survey data because we cannot be sure that all respondents understood the questions correctly (no systematic bias was suggested). The alternative suggested to me was doing a handful of in-depth interviews...

P.S. If you have any views on what lures students into content-less sociology, I'd like to hear them.

mel.bartley@gmail.com said...

Colin, you are so unworldly: you don't seem to link the picture of your book Cradle to Grave to anyone who is actually selling the thing.

Colin said...

Available from an online store bearing the name of a mythical tribe of martial women.

After John Holmwood decided he didn't want to store & distribute from his garage & Sociology Press was wound up, the rights were acquired by Routledge.

There is a slightly funny story about Sociology Press. The BSA claimed some connection with it & had a link to it for several years from their website. The link, though, was actually to the wrong publisher - a Sociology Press publishing out of the US with no connection (as far as I know) to Holmwood's operation.

I thought it symbolized perfectly the BSA's attitude towards the project.

Anonymous said...

Interesting. Prof. Gorard's papers are getting quite a lot of traction with the UK government at the moment.