Getting access to paywalled newspaper and journal articles

So recently several individuals have asked how to obtain articles I cite in my blog posts (here or on the American Society of Evidence Based Policing site) that they do not have access to. That is perfectly fine, and I want to share a few tricks I have learned over the years for accessing paywalled newspaper and journal articles.

I currently only pay for a physical Sunday subscription to the Raleigh News & Observer (and get its online content for free because of that). Besides that, I have never paid for a newspaper article or a journal article.

Newspaper paywalls

There are two techniques for dealing with newspaper paywalls. 1) Some newspapers give you a free number of articles per month. To skirt this, you can open the article in a private/incognito window in your preferred browser (or open the article in another browser entirely, e.g. you use Chrome most of the time, but keep Firefox around just for this occasion).

2) If that does not work, and you have the exact address, you can check the Wayback Machine. For example, here is a search for a WaPo article I linked to in the last post. This works even for very recent articles, so if you can stand being a few days behind, the article is often already listed on the Wayback Machine.

Journal paywalls

Single piece of advice here: use Google Scholar. Here for example is a search for the first Braga POP Criminology article in the last post. Google Scholar will tell you if a free pre- or post-print URL exists somewhere. See the PDF link on the right here. (You can also click through to “All 8 Versions” below the article, and that will sometimes lead to other open links.)

Quite a few papers have PDFs available, and don’t worry if it is a pre-print; they rarely change in substance when going into print.1

For my personal papers, I have a google spreadsheet that lists all of the pre-print URLs (as well as the replication materials for those publications).

If those do not work, you can see if your local library has access to the journal, but that is not as likely. And I still have a Uni affiliation that I can use for this (the library and getting some software cheap are the main benefits!). But if you are at that point and need access to a paper I cite, feel free to email and ask for a copy (it is not that much work).

Most academics are happy to know you want to read their work, and so it is nice to be asked to forward a copy of their paper. So feel free to email other academics as well to ask for copies (and slip in a note for them to post their post-prints to let more people have access).

The Criminal Justician and ASEBP

If you like my blog topics, please consider joining the American Society of Evidence Based Policing. To be clear, I do not get paid for referrals; I just think it is a worthwhile organization doing good work. I have started a blog series there (which requires a membership to read), and post once a month. The current articles I have written are:

So if you want to read more of my work on criminal justice topics, please join the ASEBP. It is of course also a good networking and training resource you should be interested in as well.


  1. You can also sign up for email alerts on Google Scholar for papers if you find yourself reading a particular author quite often.↩︎

Managing R environments using conda

DataColada have a recent blog post about their groundhog package, intended to aid reproducible science. That is more from the perspective of "I have this historical code, how can I try to replicate that researcher's environment to get the same results" – so more of a forensic task. What I am going to talk about in this post is creating an environment from the get-go that has the info necessary for others to replicate your work.

First, before I get to that, I have come across people critiquing open science with what are essentially ‘the perfect is the enemy of the good’ arguments. Sharing code is good, period. Even if there are different standards of replicability, some code is quite a bit better than no code. And scientists are not professional programmers – understanding all of this stuff takes time and training that is often in short supply in academia (hence me blogging about boring stuff like creating environments and using github). If this stuff is over your head, please feel free to email/ask a question and I can try to help.

At work I have to solve a very similar problem to scientific reproducibility; I need to write code in one environment (a dev environment, or sometimes my laptop), and then have that code run in a production environment. The way we do this at work is either via conda environments (for persistent environments) or docker images (for ephemeral environments). We are currently 100% Python for machine learning, but you can use the same workflow for R environments (or a mashup of R/Python).

Groundhog doesn’t really solve this all by itself – it doesn’t specify the version of R for example. (And there are issues with even using dates to try to forensically recreate environments, see the Hackernews thread.) But you can use conda directly to set up a reproducible environment from the get-go. Again, what is good for reproducible science is good for reproducing my work in different environments at my workplace.

I have a github folder showing the steps, but they are quite simple, so here they are. To start, in the root of your project directory, have two files. One is a requirements.txt file that specifies the R libraries you want. This file may look like:

# This is the requirements.txt file
r-spatstat
r-leaflet
r-devtools
r-markdown

Conda has the annoying convention of adding r-* at the front to distinguish R packages from Python ones. If there happen to be libraries you are using that are not on conda-forge (e.g. just added to CRAN, or more likely only on github), we can solve that as well. Make a second script, here I name it packs.R, and within this R script you install those additional packages. Here is an example installing groundhog, as well as my ptools package that is only on github. Each has a way to point to a very specific version:

# This is the packs.R script
library(devtools) # for installing github packages

# Install specific commit/version from github
install_github("apwheele/ptools",ref="9826241c93e9975804430cb3d838329b86f27fd3")

# Install a specific library version from CRAN
# Specifying specific version url for cran package (not on conda-forge)
gh_url <- "https://cran.r-project.org/src/contrib/groundhog_1.5.0.tar.gz"
install.packages(gh_url,repos=NULL,type="source")

OK, now we are ready to set up our conda environment. From the command line (or more specifically the Anaconda prompt), if you are in the root of your project, you can run something like:

conda create --name rnew
conda activate rnew
conda install -c conda-forge r-base=4.0.5 --file requirements.txt

This installs a specific version of R, as well as the libraries in the text file. Then, if you have additional libraries to install in packs.R, you can run:

Rscript packs.R

And conda is smart – the libraries default to installing all the R junk in the right folder (you can print out .libPaths() in an R session to see where your conda environment lives). (I am more familiar with conda, so cannot comment in detail, but this is likely exchangeable with RStudio’s renv – horses for courses.)

You may notice my requirements.txt file does not have specific versions. Often you want to be generic when first setting up your project, and let conda figure out the mess of version dependencies. If you want to be uber vigilant, you can then save the exact versions of the packages by overwriting your initial requirements file, something like:

conda list --export > requirements.txt

And this updated file will have everything in it – R version, conda-forge build IDs, etc. (although it does not include the packages you installed outside of conda, so you still need to keep the packs.R file to be able to replicate).

I will put on the slate an example of using docker to create a totally independent environment to replicate code in. I think that is a bit of overkill for most academic projects (although it is even more isolated than this workflow). Even all this work is not 100% foolproof – conda or CRAN or the github package you installed could go away tomorrow; no guarantees in life. But again, don’t let the perfect be the enemy of the good – share your scientific code, warts and all!

Bias and Transparency

Erik Loomis over at the LGM blog writes:

It’s fascinating to be doing completely unfundable research in the modern university. It means you don’t matter to administration. At all. You are completely irrelevant. You add no value. This means almost all humanities people and a good number of social scientists, though by no means all. Because universities want those corporate dollars, you are encouraged to do whatever corporations want. Bring in that money. But why would we trust any research funded by corporate dollars? The profit motive makes the research inherently questionable. Like with the racism inherent in science and technology, all researchers bring their life experiences into their research. There is no “pure” research because there are no pure people. The questions we ask are influenced by our pasts and the world in which we grew up. The questions we ask are also influenced by the needs of the funder. And if the researcher goes ahead with findings that the funder doesn’t like, they are severely disciplined. That can be not winning the grants that keep you relevant at the university. Or if you actually work for the corporation, being fired.

And even when I was an unfunded researcher at university collaborating with police departments this mostly still applied. The part about research being quashed was not an issue for me personally, but the types of questions asked are certainly influenced. A PD is unlikely to say ‘hey, let’s examine some unintended consequences of my arrest policy’ – they are much more likely to say ‘hey, can you give me an argument to hire a few more guys?’. I do know of instances of other people’s work being limited from dissemination – in the ones I am familiar with, it was honestly stupid for the agencies not to let the researchers go ahead with the work, but I digress.

So we are all biased in some ways – we might as well admit it. What to do? One of my favorite passages in relation to our inherent bias is from Denis Wood’s introduction to his dissertation (see some more backstory via John Krygier). But here are some snippets from Wood’s introduction:

There is much rodomontade in the social sciences about being objective. Such talk is especially pretentious from the mouths of those whose minds have never been sullied by even the merest passing consideration of what it is that objectivity is supposed to be. There are those who believe it to consist in using the third person, in leaning heavily on the passive voice, in referring to people by numbers or letters, in reserving one’s opinion, in avoiding evaluative adjectives or adverbs, ad nauseum. These of course are so many red herrings.

So we cannot be objective, no point denying it. But a few paragraphs later from Wood:

Yet this is no opportunity for erecting the scientific tombstone. Not quite yet. There is a pragmatic, possible, human out: Bare yourself.

Admit your attitudes, beliefs, politics, morals, opinions, enthusiasms, loves, odiums, ethics, religion, class, nationality, parentage, income, address, friends, lovers, philosophies, language, education. Unburden yourself of your secrets. Admit your sins. Let the reader decide if he would buy a used car from you, much less believe your science. Of course, since you will never become completely self-aware, no more in the subjective case than in the objective, you cannot tell your reader all. He doesn’t need it all. He needs enough. He will know.

This dissertation makes no pretense at being objective, whatever that ever was. I tell you as much as I can. I tell you as many of my beliefs as you could want to know. This is my Introduction. I tell you about this project in value-loaded terms. You will not need to ferret these out. They will hit you over the head and sock you in the stomach. Such terms, such opinions run throughout the dissertation. Then I tell you the story of this project, sort of as if you were in my – and not somebody else’s – mind. This is Part II of the dissertation. You may believe me if you wish. You may doubt every word. But I’m not conning you. Aside from the value-loaded vocabulary – when I think I’ve done something wonderful, or stupid, I don’t mind giving myself a pat on the back, or a kick in the pants. Parts I and II are what sloppy users of the English language might call “objective.” I don’t know about that. They’re conscientious, honest, rigorous, fair, ethical, responsible – to the extent, of course, that I am these things, no farther.

I think I’m pretty terrific. I tell you so. But you’ll make up your mind about me anyway. But I’m not hiding from you in the third person passive voice – as though my science materialized out of thin air and marvelous intentions. I did these things. You know me, I’m

Denis Wood

We will never be able to scrub ourselves clean to be entirely objective – a pure researcher as Loomis puts it. But we can be transparent about the work we do, and let readers decide for themselves whether the work we bring forth is sufficient to overcome those biases or not.

Open source code projects in criminology

TLDR: please let me know about open source code projects related to criminology.

As part of my work with CrimRxiv, we have started the idea of creating a page to link to various open source criminology focused projects. That is overly broad, but at a high level we are thinking of pragmatic resources (e.g. code repositories/packages, open source textbooks), as opposed to more traditional literature.

As part of the overlay journal we are starting, D1G1TAL & C0MPUTAT10NAL CR1M1N0L0GY, we are trying to get folks to submit open source work as papers. (As a note, this will not have any charges to publish.) The motivation is two-fold: 1) it gives a venue to get your code peer reviewed (e.g. similar to the Journal of Open Source Software). This is mainly for the writer, to give academic recognition for your open source work. 2) is for the consumer of the information: it is a nice place to keep up on current developments. If you write an R package to do some cool analysis I want to be aware of it!

For 2, we can accomplish something similar by just linking to current projects. I have started a spreadsheet of links I am collating for now (in the future this will move to that page; you need to be signed into CrimRxiv to see the list). For examples of the work I have collated so far:

Then we have various R packages floating around from folks such as Greg Ridgeway, Jerry Ratcliffe, and Wouter Steenbeek (as well as the others I mentioned previously – you can check out their other projects on Github). Please add info to the google spreadsheet, comment here, or send me an email if you have some work (or know of others’ work) that should be added.

Again I want to know about your work!

Reproducible research and code review for journals

I recently came across two different groups broaching the subject of code reviews, and reproducible research more broadly, for criminal justice. There are certainly aspects of either that make them difficult in the context of peer review. But I am not one to let the perfect be the enemy of the good, so I will lay out the difficulties and give some comments on potential good-enough solutions that still make marked improvements on the current state of affairs in crim/cj research.

Reproducible Research

So what do I mean by reproducible research? Jeromy Anglim on CrossValidated has a good breakdown of the different ways we may apply the term. To some it may mean: if you did a hot spots policing experiment, can I replicate the same crime reduction results in another city?

Those replications are important to publish (simply because social science experiments will inevitably have quite a bit of variance), but this is often not what we are talking about when we talk about replication. We are often talking about a goal much smaller in scope – if I give you the exact same data, can you reproduce the tables/figures in the manuscript you used to make your inferences?

One problem that is often the case with CJ research is that we are working with sensitive data. If I do analysis on a survey of a sensitive topic, I often cannot share the data. But I do not believe that should entirely put the question of reproducible research to bed. I have broken down different levels that are possible in making research more reproducible:

  A. Sharing data and code files to reproduce the paper results
  B. Sharing code files and simulated data that illustrate the results
  C. Sharing the plain-text log files showing the code and results of tables/figures

So I have not seen C proposed anywhere, but it is a dead simple solution that almost everyone should be able to accommodate. It simply involves typing log using "output.txt", text at the top of your Stata file, or OUTPUT EXPORT /PDF DOCUMENTFILE="output.pdf" at the end of your SPSS analysis (or it could be done via the GUI), etc. These are the log/output files used to generate the results you report in the paper, and they typically contain both the commands run and the resulting tables. These files can quite easily avoid containing privileged information (in fact they won’t by default most of the time, unless for example you printed out individual names in a table in intermediate results).
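
The same idea carries over to scripting languages. As a minimal sketch (my own illustration, not something either group has proposed), in Python you could wrap your analysis so that everything printed lands in a plain-text file you can post alongside the paper:

# Minimal level C sketch in Python: anything printed inside the
# with-block goes into output.txt instead of the console.
from contextlib import redirect_stdout

with open("output.txt", "w") as f, redirect_stdout(f):
    print("Commands and results for Table 1")
    # ... run the actual analysis here and print the tables you report ...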

To accomplish C does take some modicum of wherewithal in terms of writing code, but it is a pretty low bar. So I see no reason why all quantitative analyses cannot require at least this step right now. I realize it is not foolproof – a bad actor could go and edit the results (same as they could edit the results without this information). But it ups the level of effort to manipulate results by quite a bit, and more importantly has the potential to catch more mundane transcription errors that occur quite frequently.

Sometimes I want more details on the code used, the nature of the data, etc. (Most quasi-experimental designs, for example, can be summed up as: shape your data in a special way and run a particular regression model.) For people like me who care about that, B helps, in that I can see the code front-to-back, actually go and inspect the shape and values in a particular rectangular dataset, and see how the code interacts with those objects. The only full-on example of this I am aware of is a recent example paper in Nature Behavior that shares the code using simulated data.

B is also very similar to releasing a statistical package that implements your method. So if you release an R package that conducts your new fancy technique, even if you can’t share your data, it is really good for people to be able to view the underlying code by itself, both to understand the technique better and to build on your work. If you create a new technique, it is a crazy ton of work for others to replicate it on their own, so most people will not bother.

A is most of the way to the gold standard – sharing both the data and the code used to reproduce the analysis. Both A and B take a significant amount of statistical programming knowledge to accomplish. Most people in our field do not have the skills to write an analysis front-to-back that can run as a series of scripts. To get to A/B, grad programs in crim/cj need to spend much more time teaching these skills, which is near zero now almost across the board.

One brief thing to mention about A is that the boundary is difficult to define. So for example, I share code to reproduce analysis in my 311 and crime at micro places in DC paper (paper link, code). But this starts from a dataset that has the street units in DC and all of the covariates already compiled. But where did that dataset come from? I created it by compiling many different sources, so the base dataset is itself very difficult to replicate. Again not letting the perfect be the enemy of the good, I think just starting from your compiled dataset, and replicating the tables/graphs in the manuscript is better than letting the fuzzy boundary prevent you from sharing anything.

Code Reviews for Journal Submissions

The hardest part of A is that even after you share your data, some journals want to be able to run the code locally to entirely reproduce your results. So while I have shared data and code (A above) for many papers (see this spreadsheet), they have not been externally vetted by any of those journals. This vetting is the standard in some economics journals now I believe, and I would not be surprised if it is in some poli-sci journals as well. It is a very hard problem though, and requires significant resources from both the journal and the researcher.

The biggest hurdle is that even if you share your data/code, your particular system may be idiosyncratic. You may have different R libraries installed than me. You may have different versions of python packages. I may have used a program on Windows to do some analysis you cannot do on a Mac. You may rely on some paid API I cannot access.

These are often solvable problems, but they take quite a bit of time to work out. A comparable example from my work is when data scientists say ‘going to production’. This often involves taking some analysis I did on my local machine, and making it run autonomously on my company’s servers. There are some things that make it more or less difficult than the typical academic situation, but I think it is broadly comparable. Going to production for a project will typically take me 3-6 months at 50% of my time, so maybe something like 300 hours as a lowish-end estimate. And that is just the time it takes from the researcher’s end; from the journal’s end it will also take a significant amount of time to compile everyone’s code and verify the results.

Because of this, I don’t think the fully reproducible ‘re-run my code and generate the exact same tables’ standard is feasible in the current way we do academic research and peer review. But again, that is why I list C above – we shouldn’t let the perfect be the enemy of the good.

Validating New Empirical Techniques

The code review above is not really code review in the sense that someone looks at your code and says it is correct; it is simply asking, can I get the same results as you? You may want peer review to accomplish not only saying whether the work is reproducible, but whether it is valid/correct. There are a few things toward this end I would like to see more often in crim/cj. I realize we are not statistics, so we cannot often ask for formal proofs. But there are simpler things we can do to verify results. These are the responsibility of the researcher to provide, not the reviewer to script up on their own to validate someone else’s work.

One, illustrate the technique using a very simplified example. So for instance, in my p-median patrol areas paper, I show an example of constructing the linear program with only four areas. You should be able to calculate what the result should be by hand, so you can verify the correctness of your algorithm. This has the added benefit of being a very good pedagogical way to describe your method.

Two, illustrate the technique on a larger sample of simulated data in which you again know the correct result. For one example of this, I showed how to estimate group based trajectory models using deep learning libraries. Again your model/method should be able to recover the correct result (which you know) given the simulated fake data.
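
As a toy illustration of that idea (my own made-up example, not the trajectory application itself), simulate data where you know the true coefficients and check that the estimator recovers them:

# True intercept is 2 and true slope is 3, so the fitted values
# should land close to those numbers.
import numpy as np

rng = np.random.default_rng(10)
x = rng.normal(size=10_000)
y = 2.0 + 3.0*x + rng.normal(size=10_000)

slope, intercept = np.polyfit(x, y, deg=1)  # highest degree coefficient first
print(intercept, slope)                     # prints values close to 2 and 3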

Three, validate the result using real data compared to the current standard. For crime mapping papers, this means comparing forecasts to RTM, or simpler regression models, or simply prior crime = future crime, on out-of-sample data. Amazingly, many machine learning papers in CJ do not do out-of-sample predictions. If it is an inferential procedure, comparing the results to some other status quo technique is similar, such as showing conformal prediction intervals have smaller widths (so more statistical power) than placebo results for synthetic control designs (at least for that example with state panel level crime data).
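
And a hedged toy sketch of that kind of out-of-sample check (again my own illustration with simulated counts, not real crime data or any published model):

# Compare a fitted forecast to the prior crime = future crime baseline
# on held-out areas; lower mean absolute error is better.
import numpy as np

rng = np.random.default_rng(5)
prior = rng.poisson(5, size=200)    # last period's simulated counts per area
future = rng.poisson(prior + 1)     # this period's counts, correlated with prior

train, test = slice(0, 150), slice(150, 200)
baseline = prior[test]                               # prior = future forecast
scale = future[train].mean() / prior[train].mean()   # trivial fitted adjustment
model = scale * prior[test]

print(np.abs(baseline - future[test]).mean())
print(np.abs(model - future[test]).mean())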

You may not have all three of these examples in any particular paper, but I think for very new techniques 1 or 2 is necessary. 3 is often a by-product of the analysis anyway. So I do not believe any of these asks are that onerous. If you have the skills to create some new technique, you should be able to accomplish 1 or 2.

I do not have any special advice in terms of the reviewer’s perspective. When I do code reviews at work, what we do is go line by line, and my co-workers give high level design advice. E.g. you should use a config file for this instead of defining it inline, you should turn this block into a function, you should make a class to open/close the database connections, etc. The code reviews do not validate technical correctness, so if I queried the wrong data they wouldn’t know it in the code review. The proof is in the pudding so to speak, so if my results are performing really badly in the real world I know I am doing something wrong. (And the converse: if my results are on the mark and making money, I am pretty sure I did nothing terribly wrong.)

Because there are not these real world mechanisms to validate code in peer reviewed papers, my suggestions for 1/2/3 are the closest I think we can get in many circumstances. That and simply making your code available will dramatically improve the reproducibility and validity of your research compared to the current status quo in our field.

A bunch of random shout outs

Busy, busy, busy! Hopefully I will have some time in the near future to write up some more data science posts. But for now, here is a small python snippet to help you build interaction variables between two sets of numpy arrays/dataframes.

import numpy as np

def np_int(a, b):
    # row-wise outer product of the columns of a and b,
    # reshaped back to (n_rows, a_cols*b_cols) interaction columns
    rows = a.shape[0]
    cols = a.shape[1]*b.shape[1]
    return np.einsum('ij,ik->ijk', a, b).reshape((rows, cols))
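
A quick toy check of what np_int returns (my own example): crossing two columns with two columns gives four interaction columns per row.

a = np.array([[1., 2.],
              [3., 4.]])
b = np.array([[10., 100.],
              [10., 100.]])
print(np_int(a, b))
# [[ 10. 100.  20. 200.]
#  [ 30. 300.  40. 400.]]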

This works for pytorch as well (just replace np.einsum with torch.einsum). So coming up (eventually) I will illustrate encoding interaction between hidden layers in a deep learning model. But for now some quicker updates.

Shout out #1: Scott Jacques has continued to push the charge for open access to criminology journals. He has two recent posts about post-prints, and how our main journal (Criminology) has an excessive policy of not allowing authors to post post-prints for over two years (whereas the majority of criminology journals allow you to post immediately).

Several aspects of open science are tricky – posting pre-prints/post-prints is not. If we can come together as a group this is an easy, no cost way to greatly improve the accessibility of our work to the greater public.

Shout out #2: The folks at Police Rewired have hosted a hackathon intended to Hack Hate. It is too late to participate, but they will be displaying the results this Sunday. I have not had the chance to participate in any code hackathons; I will need to make a concerted effort in the future to give at least one a shot. (It seems hard – how can you do any work in only a day or a week or two!? But the proof is in the pudding so to speak, I’ve seen some pretty cool things come out of various hackathons in the past.)

Shout out #3: My workplace, HMS, is involved in a data sharing collaborative called the Digital Health DRC. They also have a hackathon coming up, but this is related to Telehealth use. The Digital Health DRC is pretty cool though, it is basically a way for HMS (and several other private sector entities) to share various datasets with researchers over the globe.

The scope of HMS’s data is somewhat outside the realm of my old stomping grounds of criminology (but not entirely, a big part of my job is identifying potentially fraudulent patterns in claims data). But for folks who have a research question that could be answered using health insurance claims data, this is a good resource to look into. (HMS has pretty good coverage of Medicare claims across the US.)

Finally, I experimented for a few days with hosting ads on the site. I managed to serve up a few thousand and made 10 cents, so I will turn that off for now. I debated putting up a button for folks to donate a coffee, but even that is not necessary. (I can afford the few bucks for the domain, and I use dropbox to back up my files anyway, so hosting extra materials is not a big deal.) I would rather folks just take my nerdy notes and make their own cool stuff (and share it with me!). I may need to figure out a better hosting solution for images though – google photos is continuing to give me trouble I see (so if an image is not coming through, feel free to let me know in the comments or send me an email).

300 blog posts and public good criminology

This isn’t technically my 300th blog post, but the 300th page I’ve constructed on my blog (so e.g. it includes pages I’ve made for classes). I’ve posted a spreadsheet of the titles and dates of the posts over time (and while updating it I noticed I was at 300).

I typically get around 200-300 views per day. Most of these are probably bots, but unless say over 90% are bots, this website gets way more views than the cumulative views of all my academic papers combined. Here is a screenshot of the stats WordPress gives me. I thought my downtick in 2019 was going to spiral into very few views, but it is still holding on.

I kind of have three different types of blog posts. One is example code snippets/data analysis. Often these are things I have done multiple times, so I want to create a record I can more easily search up later. For example making a hexbin map in ggplot, or a margins plot in Stata. I wrote a recent post because I was talking with a friend about crime weights, and I wanted an example of using regression in python and an error bar plot for my library. (Quite a few birds with that stone.)

Two is questions I repeatedly encounter from students. For example, I made a list of demographic variables I use in the census, and where to find or scrape crime generator variables. Consistently my most popular post is testing the equality of two regression coefficients.

The third is just more generic opinion pieces. For example my notes on (the now late) David Bayley’s writing on the police potential to reduce crime, or Jane Jacobs’s take on neighborhoods, or that I don’t think latent trajectories are real things.

Some are multiple of these categories put together, particularly opinion pieces with example code snippets to illustrate the points I am making. Like a simulation of why I like to model individual delinquency items, or how to balance false positives in bail decisions.

On Public Good Criminology

None of these per se fit the framework of typical peer-reviewed output. So despite no peer review, I think things like deriving optimal treatment allocation with network spillovers, or showing that conformal prediction intervals for synthetic control estimates are much smaller than permutation tests, are substantive contributions to share!

So that brings me to the public good point. Most criminologists have a default of only valuing a closed peer review system. Despite my blog posts not being peer reviewed (ditto for the pre-prints I post at first), I hope folks can take the time to judge for themselves whether they are valuable or not. We would be much better off as a group if we did things like share code, class preps, or failed projects by default.

Some of these posts I might write up if we had a short journal for our field akin to Economics Letters, but even that is a lot of work for very little value added to be frank. (If I had infinite time I also might turn my notes on Poisson/Negative Binomial regression into a little Sage green book.) Being a private sector data scientist now without the tenure boot on my neck, I don’t really have any need or desire to go through that process.

If all you value is getting the opinions of a handful of other academics, then by all means keep your work close to the chest and only publish in peer-reviewed journals. If you want to provide a public good though, your work actually needs to be public.

Why I publish preprints

I encourage peers to publish preprint articles — journal articles posted before they go through the whole peer review process and are published. It isn’t the norm in our field, and I’ve gotten some pushback from colleagues, so I figured I would put on paper why I think it is a good idea. In short, the benefits (increased exposure) outweigh the minimal costs of doing so.

The good — getting your work out there

The main benefit of posting preprints is to get your work more exposure. This occurs in two ways. One is that traditional peer-reviewed work is often behind paywalls, which prevents the majority of non-academics from accessing your work. The point about paywalls applies just the same to preventing other academics from reading your work in some cases. So while the prior blog post I linked by Laura Huey notes that you can get access to some journals through your local library, it takes several steps. Adding in steps means you are basically losing out on some folks who don’t want to spend the time. Even through my university it is not uncommon for me to not be able to access a journal article. I can technically take the step of getting the article through inter-library loan, but that takes more time. Time I am not going to spend unless I really want to see the contents of the article.

This I consider a minor benefit. Ultimately if you want your academic work to be more influential in the field you need to write about your work in non-academic outlets (like magazines and newspapers) and present it directly to CJ practitioner audiences. But there are a few CJ folks who read journal articles you are missing, as well as a few academics who are missing your work because of that paywall.

A bigger benefit is actually that you get your work out much quicker. The academic publishing cycle makes it impossible to publish your work in a timely fashion. If you are lucky, once your paper is finished, it will be published in six months. More realistically it will be a year before it is published online in our field (my linked article only considers when it is accepted, tack on another month or two to go through copy-editing).

Honestly, I publish preprints because I get really frustrated with waiting on peer review. No offense to my peers, but I do good work that I want others to read — I do not need a stamp from three anonymous reviewers to validate my work. I would need to do an experiment to know for sure (having a preprint might displace some views/downloads from the published version), but I believe the earlier and open versions on average double the amount of exposure my papers would have had compared to just publishing in traditional journals. It is likely a much different audience than traditional academic crim people, but that is a good thing.

But even without that extra exposure I would still post preprints, because it makes me happy to self-publish my work when it is at the finish line, in what can be a miserably long and very much delayed gratification process otherwise.

The potential downsides

Besides the actual time cost of posting a preprint (in the next section I detail that more precisely; it isn’t much work), I will go through several common arguments for why posting preprints is a bad idea. I don’t believe they carry much weight, and I have not personally experienced any of them.

What if I am wrong — Typically I only post papers either when I am doing a talk, or when it is ready to go out for peer review. So I don’t encourage posting really early versions of work. While even at this stage there is never any guarantee you did not make a big mistake (I make mistakes all the time!), the sky will not fall down if you post a preprint that is wrong. Just take it down if you feel it is a net negative to the scholarly literature (which is very hard to do — the results of hypothesis tests do not make the work a net positive/negative). If you think it is good enough to send out for peer review it is definitely at the stage where you can share the preprint.

What if the content changes after peer review — My experience with peer review is mostly pedantic stuff — lit. review/framing complaints, do some robustness checks for analysis, beef up the discussion. I have never had a substantive interpretation change after peer-review. Even if you did, you can just update the preprint with the new results. While this could be bad (an early finding gets picked up that is later invalidated) this is again something very rare and a risk I am willing to take.

Note peer review is not infallible, and so hedging that peer review will catch your mistakes is mostly a false expectation. Peer review does not spin your work into gold, you have to do that yourself.

My ideas may get scooped — This has never personally happened to me. Posting a preprint can actually prevent this in terms of more direct plagiarism, as you have a time-stamped example of your work. In terms of someone taking your idea and rewriting it, this is a potential risk (the same risk as if you present at a conference) — really only applicable for folks working on secondary data analysis. With the preprint posted the other person should at least cite your work, but sorry, neither presenting some work nor posting a preprint gives you sole ownership of an idea.

Journals will view preprints negatively — Or journals do not allow preprints. I haven’t come across a journal in our field that forbids preprints. I’ve had one reviewer (out of likely 100+ at this point) note the posted pre-print as a negative (suggesting I was double publishing or plagiarizing my own work). An editor that actually reads reviews should know that is not a substantive critique. That was likely just a dinosaur reviewer who wasn’t familiar with the idea of preprints (and they gave an overall positive review in that one case, so it did not get the paper axed). If you are concerned about this, just email the editor for feedback, but I’ve never had a problem from editors.

Peer reviewers will know who I am — This I admit is a known unknown. Peer review in our crim/cj journals is mostly double blind (most geography and statistics journals I have reviewed for are not; I know who the authors are). If you presented the work at a conference you have already given up anonymity, and the field is small enough that for a good chunk of work the reviewers can guess who the author is anyway. So your anonymity is often a moot point at the peer review stage.

So I don’t know how much reviewers are biased if they know who you are (it can work both ways, if you get a friend they may be more apt to give a nicer review). It likely can make a small difference at the margins, but again I personally don’t think the minor risk/cost outweighs the benefits.

These negatives are no doubt real, but again I personally find them minor enough risks to not outweigh the benefits of posting preprints.

The not hard work of actually posting preprints

All posting a preprint involves is uploading a PDF file of your work to either your website or a public hosting service. In my current workflow I have the different components of a journal article in several Word documents (I don’t use LaTeX very often, and Word doesn’t work so well with one big file, especially with many pictures). So I export those components to PDF files, and stitch them together using the freeware tool PDFtk. It has a GUI and a command line, so I just have a bat file in my paper directory that contains something like:

pdftk.exe TitlePage.pdf MainPaper.pdf TablesGraphs.pdf Appendix.pdf cat output CombinedPaper.pdf

So it just takes a double click to update the combined PDF when I edit the different components.
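
If you would rather skip the bat file, a rough Python alternative (an assumption on my part that you have the pypdf package installed; this is not part of my actual workflow) does the same stitching:

# Merge the component PDFs into one combined paper with pypdf.
from pypdf import PdfWriter

writer = PdfWriter()
for part in ["TitlePage.pdf", "MainPaper.pdf", "TablesGraphs.pdf", "Appendix.pdf"]:
    writer.append(part)          # appends every page of each component
writer.write("CombinedPaper.pdf")
writer.close()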

Public hosting services I have used in the past to post preprints are Academia.edu, SSRN, and SocArXiv, although again you could just post the PDF on your webpage (and Google Scholar will eventually pick it up). I use SocArXiv now, as SSRN currently makes you sign up for an account to download PDFs (again a hurdle, the same as going through inter-library loan). Academia.edu also makes you sign up for an account, and has weird terms of service.

Here is an example paper of mine on SocArXiv. (Note the total downloads – most of my published journal articles have fewer than half that many.) SocArXiv also does not bother my co-authors to create an account when I upload a paper. If we had a more criminal justice focused repository I would use that, but SocArXiv is fine.

There are other components of open science I should write about — such as replication materials/sharing data, and open peer reviewed journals, but I will leave those to another blog post. Posting preprints takes very little extra work compared to what academics are currently doing, so I hope more people in our field start doing it.