Criminology researchers had a bad look going for them in the Stewart/Pickett debacle. But a recent exchange shows behavior we would all do well to emulate: a critique of a meta-analysis (by Kim Rossmo) and a voluntary retraction (by Wim Bernasco).
Exemplary behavior by both sides in this exchange. I am sure it is irksome to be on the receiving end, but Kim has pursued response/critique pieces over his career. And you can see in the Retraction Watch piece that this is not easy work (basically as much work as writing an original meta-analysis). This is important: if science is to be self-correcting, we need people to spend the time to make sure prior work was done correctly.
And from Wim’s side it shows much more humility than the average academic. It is totally OK to admit one’s faults/mistakes and move on. I have no doubt that if Kim (or whoever) did a deep dive into my prior papers, he would find some mistakes, and maybe some would be worth a retraction. It is OK; Wim will not be made to wear a dunce hat at the next ASC or anything like that. Criminology would be better off if we were all more like Kim and more like Wim.
One thing, though: I agree with Andrew Gelman that it is OK to do a blog post about errors you find before going to the author directly. Most academics don’t respond to critiques at all (or make superficial excuses). So if you find an error in my work, go ahead and blog it or write to the editor or whatever. I am guessing it worked out here because I imagine Kim and Wim have crossed paths before, and Wim actually answers his emails.
Note I also think it is OK to not respond to critiques at all. For example, Data Colada recently made a dig at an author for not responding to a critique (see the author feedback at the bottom). If you critique my work I don’t think I’m obligated to respond. I will respond if I think it is worth my time; papers are not a contract to defend until death.
A second thing I wanted to blog about is reviewing papers. You can see in my comment on Gelman’s blog that Kaiser Fung asks, “What happened during the peer review process? They didn’t find any problems?” And as you can see in the original Retraction Watch piece, I think Kim did his due diligence in the original review. It was only after the paper was published, when he more seriously pursued a replication analysis (which is beyond what is typically expected in peer review), that he found inconsistencies that clearly invalidated the meta-analysis.
It is hard, when reviewing papers, to find really widespread problems with an empirical analysis. Personally I do small checks, think of them as audits, that are not exhaustive but that often do turn up errors. For meta-analyses, one thing I have done is pull out one, two, or three studies and see if I can replicate the point effects the authors report. One thing I realized in doing this, for example, is that the Braga meta-analysis of hot spots uses the largest point effect for some tables, which I think is probably a mistake; they should just pool all of the effects reported (although the variants I have reviewed have calculated them correctly).
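To make that concrete, here is a minimal sketch (in Python, with made-up numbers) of what such a spot check can look like: pull a few reported effects and standard errors out of a table, compute the fixed-effect inverse-variance pooled estimate, and compare it against what the paper reports. This is just an illustration of the arithmetic, not the pipeline any particular meta-analysis used.

    # Reviewer spot check: inverse-variance weighted pooled effect
    # (hypothetical numbers, for illustration only)
    import numpy as np

    effects = np.array([0.25, 0.40, 0.10])  # reported point effects (e.g., log odds ratios)
    ses = np.array([0.10, 0.15, 0.08])      # reported standard errors

    weights = 1 / ses**2                    # fixed-effect inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1 / np.sum(weights))

    print(f"Pooled effect {pooled:.3f}, SE {pooled_se:.3f}")
    # Using only the largest effect per study (e.g., effects.max()) will tend to
    # overstate the pooled estimate relative to pooling everything reported.

If the number this produces is nowhere near the published table, that is when it is worth writing the authors (or the editor).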
Besides this, I do not have much advice specific to meta-analyses. I have at times noted missing papers, but that was because I happened to be familiar with them, not because I replicated the authors’ search strategy. And I have advocated sharing data and code in reviews (which should clearly be done for meta-analyses), but pretty much no one does this.
For papers that are not meta-analyses, one thing I do is try to replicate any inline statistics (often things like F-tests or Chi-square tests). Looking at regression coefficients it may be simpler to spot a misprint, but I do not have Chi-square values committed to memory. I can’t remember a time I was actually able to replicate one of these; I reviewed a paper one time with almost 100 inline stats like this and I couldn’t figure out a single one! It is actually somewhat common in crim articles for regression tables to only print the point effects and p-values, which makes it more difficult to check for inconsistencies without the standard errors. (You should, IMO, always publish standard errors, to allow readers to do their own tests by eye.)
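For illustration, here is a minimal sketch (again in Python, with hypothetical numbers not taken from any paper) of the kind of back-of-the-envelope checks I mean: recovering the Wald z statistic and p-value from a reported coefficient and standard error, and recomputing a Chi-square test from a reported 2x2 cross-tab.

    # Checking inline stats from what a table reports
    # (hypothetical numbers, for illustration only)
    from scipy import stats

    # Wald test from a reported coefficient and standard error
    coef, se = 0.35, 0.12
    z = coef / se
    p = 2 * stats.norm.sf(abs(z))
    print(f"z = {z:.2f}, p = {p:.3f}")  # should line up with the reported p-value/stars

    # Chi-square recomputed from a reported 2x2 cross-tab
    table = [[30, 70], [50, 50]]
    chi2, pval, dof, _ = stats.chi2_contingency(table)
    print(f"chi2({dof}) = {chi2:.2f}, p = {pval:.3f}")

If the stats a paper reports are not even in the ballpark of what these produce, that is a flag worth raising in the review.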
Even if someone did provide code and data, I don’t think I would spend the time to replicate the tables as a reviewer; it is just too much work. I think journals should hire data/fact checkers to do this (an actual argument for paid journals to add real value). I only spend around 3-8 hours per review, which is not enough time to dig into code, putz with it to get it running on my local machine, and cross-reference the results. That would be more like 2-4 days of work in many cases, I think. (And that is just using the original data; verifying the original data collection in a meta-analysis would be even more work.)
Jenn Reingle Gonzalez / November 9, 2021
Hi Andy! You’re getting big time now!
Great post.
I have done a few meta-analyses myself, and so I get to review them a lot. Unfortunately, most people use this software by Biostat called Comprehensive Meta-Analysis. Not to be rude, but it’s ‘meta-analysis for dummies’. You can translate anything into an ‘effect size’ without knowing what you are actually calculating, and it does a lot of things behind the scenes that you as the analyst would otherwise be entirely unaware of (like weighting according to the N) when calculating the pooled effect size.
It’s nice to know that some people like Kim/Wim have handled this situation appropriately, unlike some others recently. 🙂
apwheele / November 9, 2021
Thank you Jenn. I am hoping advocating for more open analyses will help to catch more errors (and meta-analysis is clearly a place that can easily be more open, no private information).