Reproducible research and code review for journals

Recently I came across two different groups broaching the subject of code reviews, and reproducible research more broadly, for criminal justice. There are certainly aspects of both that make them difficult in the context of peer review. But I am not one to let the perfect be the enemy of the good, so I will lay out the difficulties and give some comments on potential good-enough solutions that would still make marked improvements on the current state of affairs in crim/cj research.

Reproducible Research

So what do I mean by reproducible research? Jeromy Anglim on CrossValidated has a good breakdown of the different ways we may apply the term. To some it may mean: if you did a hot spots policing experiment, can I replicate the same crime reduction results in another city?

Those replications are important to publish (simply because social science experiments will inevitably have quite a bit of variance), but they are often not what we are talking about when we talk about replication. We are often talking about a goal much smaller in scope – if I give you the exact same data, can you reproduce the tables/figures in the manuscript you used to make your inferences?

One problem that often comes up with CJ research is that we are working with sensitive data. If I do analysis on a survey of a sensitive topic, I often cannot share the data. But I do not believe that should entirely put a spike in the question of reproducible research. I have broken down different levels that are possible in making research more reproducible:

  A. Sharing data and code files to reproduce the paper results
  B. Sharing code files and simulated data that illustrate the results
  C. Sharing the plain-text log files showing the code and results of tables/figures

I have not seen C proposed anywhere, but it is a dead simple solution that almost everyone should be able to accommodate. It simply involves typing log using "output.txt", text at the top of your Stata file, or OUTPUT EXPORT /PDF DOCUMENTFILE="output.pdf" at the end of your SPSS analysis (or it could be done via the GUI), etc. These are the log/output files used to generate the results you report in the paper, and they typically contain both the commands run as well as the resulting tables. These files can quite easily avoid containing privileged information (in fact they won't by default most of the time, unless for example you printed out individual names in a table of intermediate results).
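For analyses done in Python the same spirit applies. Below is a minimal sketch (my own toy illustration, not taken from any specific paper) of redirecting the printed results for a paper into a plain-text log file:

from contextlib import redirect_stdout
import statistics

crime_counts = [3, 0, 2, 5, 1]   #toy data standing in for the real analysis

#everything printed inside the with block ends up in output.txt
with open("output.txt", "w") as log, redirect_stdout(log):
    print("Table 1: descriptive statistics")
    print("mean:", statistics.mean(crime_counts))
    print("max :", max(crime_counts))

As with the Stata/SPSS logs, that text file contains only the output you report in the paper, and nothing about individual records unless you explicitly print them.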

To accomplish C does take some modicum of wherewithal in terms of writing code, but it is a pretty low bar. So I see no reason why all quantitative analyses cannot require at least this step right now. I realize it is not foolproof – a bad actor could go and edit the results (same as they could edit the results without this information). But it ups the level of effort to manipulate results by quite a bit, and more importantly has the potential to catch more mundane transcription errors that occur quite frequently.

Sometimes I want more details on the code used, the nature of the data, etc. (Most quasi-experimental designs, for example, can be summed up as shape your data in a special way and run a particular regression model.) For people like me who care about that, B helps, in that I can see the code front-to-back, can actually go and inspect the shape and values in a particular rectangular dataset, and see how the code interacts with those objects. The only full-on example of this I am aware of is a recent paper in Nature Human Behaviour that shares the code using simulated data.

B is also very similar to when people release statistical packages alongside their papers. So if you release an R package that conducts your new fancy technique, even if you can't share your data it is really valuable for people to be able to view the underlying code by itself, both to understand the technique better and to build on your work. If you develop a new technique, it is a crazy ton of work for others to replicate it on their own, so most people will not bother.

A is most of the way to the gold standard – sharing both the data and the code used to reproduce the analysis. Both A and B take a significant amount of knowledge of statistical programming to accomplish, and most people in our field do not have the skills to write an analysis front-to-back that can run as a series of scripts. To get to A/B, grad programs in crim/cj need to spend far more time teaching these skills; that time is near zero now almost across the board.

One brief thing to mention about A is that the boundary is difficult to define. So for example, I share code to reproduce the analysis in my 311 and crime at micro places in DC paper (paper link, code). But that starts from a dataset that has the street units in DC and all of the covariates already compiled. Where did that dataset come from? I created it by compiling many different sources, so the base dataset is itself very difficult to replicate. Again, not letting the perfect be the enemy of the good, I think just starting from your compiled dataset and replicating the tables/graphs in the manuscript is better than letting the fuzzy boundary prevent you from sharing anything.

Code Reviews for Journal Submissions

The hardest part of A is that even after you share your data, some journals want to be able to run the code locally to entirely reproduce your results. So while I have shared data and code (A above) for many papers (see this spreadsheet), they have not been externally vetted by any of those journals. This vetting is now the standard at some economics journals I believe, and I would not be surprised if it is at some poli-sci journals as well. It is a very hard problem though, and requires significant resources from both the journal and the researcher.

The biggest hurdle is that even if you share your data/code, your particular system may be idiosyncratic. You may have different R libraries installed than me. You may have different versions of python packages. I may have used a program on Windows to do some analysis you cannot do on a Mac. You may rely on some paid API I cannot access.

These are often solvable problems, but they take quite a bit of time to work out. A comparable example from my work is when data scientists talk about 'going to production'. This often involves taking some analysis I did on my local machine and making it run autonomously on my company's servers. There are some things that make it more or less difficult than the typical academic situation, but I think it is broadly comparable. Going to production for a project will typically take me 3-6 months at 50% of my time, so maybe something like 300 hours for a lowish-end estimate. And that is just the time it takes from the researcher's end; from the journal's end it will also take a significant amount of time to compile everyone's code and verify the results.

Because of this, I don't think the fully reproducible standard – re-run my code and generate the exact same tables – is feasible in the current way we do academic research and peer review. But again, that is why I list C above – we shouldn't let the perfect be the enemy of the good.

Validating New Empirical Techniques

The code review above is not really code review in the sense that someone looks at your code and says it is correct; it is simply asking can I get the same results as you. You may want peer review to accomplish not only saying is it reproducible, but is it valid/correct? There are a few things towards this end I would like to see more often in crim/cj. I realize we are not statistics, so we cannot often ask for formal proofs. But there are simpler things we can do to verify the results. These are the responsibility of the researcher to provide, not the reviewer to script up on their own to validate someone else's work.

One, illustrate the technique using a very simplified example. So for instance, in my p-median patrol areas paper, I show an example of constructing the linear program with only four areas. You should be able to calculate what the result should be by hand, so you can verify the correctness of your algorithm. This has the added benefit of being a very good pedagogical way to describe your method.
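To give a flavor of what such a simplified example can look like, here is a toy p-median problem in Python using the pulp library (my own illustration with made-up distances, not the formulation from the paper). With only four areas and one facility to site, you can sum the distance columns by hand and confirm the solver picks the same area:

import pulp

areas = [0, 1, 2, 3]
p = 1   #number of facilities to site
#made-up symmetric distances between the four areas
dist = [[0, 2, 9, 6],
        [2, 0, 7, 3],
        [9, 7, 0, 5],
        [6, 3, 5, 0]]

prob = pulp.LpProblem("p_median_toy", pulp.LpMinimize)
assign = pulp.LpVariable.dicts("assign", [(i, j) for i in areas for j in areas], cat="Binary")
facility = pulp.LpVariable.dicts("facility", areas, cat="Binary")

#objective: total distance from each area to its assigned facility
prob += pulp.lpSum(dist[i][j] * assign[(i, j)] for i in areas for j in areas)
#each area is assigned to exactly one facility
for i in areas:
    prob += pulp.lpSum(assign[(i, j)] for j in areas) == 1
#open exactly p facilities
prob += pulp.lpSum(facility[j] for j in areas) == p
#areas can only be assigned to open facilities
for i in areas:
    for j in areas:
        prob += assign[(i, j)] <= facility[j]

prob.solve()
print([j for j in areas if facility[j].value() == 1])   #should match the hand calculation

Summing each column of the distance matrix gives totals of 17, 12, 21, and 14, so siting the single facility at area 1 is the answer you can check against the solver output.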

Two, illustrate the technique on a larger sample of simulated data for which you again know the correct result. For one example of this, I showed how to estimate group-based trajectory models using deep learning libraries. Again, your model/method should be able to recover the correct result (which you know) given the simulated fake data.
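Here is a minimal sketch of that simulate-and-recover check in Python (a generic Poisson regression stand-in, not the trajectory-model code from that post; the use of statsmodels is my own choice here):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 5000
x = rng.binomial(1, 0.5, n)             #known covariate
true_betas = np.array([0.5, -0.3])      #known intercept and slope on the log scale
lam = np.exp(true_betas[0] + true_betas[1]*x)
y = rng.poisson(lam)                    #simulated counts

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)   #should come back close to [0.5, -0.3]

If the estimates do not land near the known parameters (up to sampling noise), something is wrong with either the code or the method.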

Three, validate the result using real data compared to the current standard. For crime mapping papers, this means comparing forecasts to RTM, or to simpler regression models, or simply to prior crime = future crime, on out-of-sample data. Amazingly, many machine learning papers in CJ do not do out-of-sample predictions. If it is an inferential procedure, comparing the results to some other status quo technique is similar, such as showing conformal prediction intervals have smaller widths (so more statistical power) than placebo results for synthetic control designs (at least for that example with state panel level crime data).
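A minimal sketch of that kind of out-of-sample check in Python (simulated data and a generic Poisson regression, just to show the mechanics of comparing against the prior crime = future crime baseline):

import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
n = 2000
prior = rng.poisson(3, n)                 #prior-period crime counts per unit
future = rng.poisson(0.8 * prior + 0.5)   #future counts, related to the past

cut = int(n * 0.7)                        #hold out the last 30% of units
X = prior.reshape(-1, 1)

model = PoissonRegressor().fit(X[:cut], future[:cut])
pred_model = model.predict(X[cut:])
pred_naive = prior[cut:]                  #baseline: prior crime = future crime

print("model MAE:", mean_absolute_error(future[cut:], pred_model))
print("naive MAE:", mean_absolute_error(future[cut:], pred_naive))

If your fancy model cannot beat the naive baseline on the held-out units, that is important information for the reader.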

You may not have all three of these in any particular paper, but I think for very new techniques 1 or 2 is necessary. 3 is often a by-product of the analysis anyway. So I do not believe any of these asks are that onerous. If you have the skills to create some new technique, you should be able to accomplish 1 or 2.

I do not have any special advice in terms of the reviewers perspective. When I do code reviews at work, what we do is go line by line, and my co-workers give high level design advice. E.g. you should use a config file for this instead of defining it inline, you should turn this block into a function, you should make a class to open/close the database connections etc. The code reviews do not validate the technical correctness, so if I queried the wrong data they wouldn’t know in the code review. The proof is in the pudding so to speak, so if my results are performing really badly in the real world I know I am doing something wrong. (And the obverse, if my results are on the mark and making money I am pretty sure I did nothing terribly wrong.)

Because there are not these real world mechanisms to validate code in peer reviewed papers, my suggestions for 1/2/3 are the closest I think we can get in many circumstances. That and simply making your code available will dramatically improve the reproducibility and validity of your research compared to the current status quo in our field.

Publishing in Peer Review?

I am close, but not quite, entirely finished with my current crim/cj peer reviewed papers. Only one paper hangs on, the CCTV clearance paper (with Yeondae Jung). It has been rejected twice so far (once on an R&R from Justice Quarterly), and has been under review in toto for around a year and a half. It will land somewhere eventually, but who knows where at this point. (The other pre-prints I have on my CV but that are not in peer reviewed journals I am not actively seeking to publish anymore.)

Given the typical lags in the peer review process, if you look at my CV I will appear active in terms of publishing in 2020 (6 papers) and 2021 (4 papers and a book). But I have not worked on any peer review paper in earnest since I started working at HMS in December 2019, only copy-editing things I had already produced. (Which still takes a bit of work, for example my Cost of Crime hot spots paper took around 40 hours to respond to reviewers.)

At this point I am not sure if I will pursue any more peer reviewed publications directly in criminology/criminal justice. (Maybe as part of a team in giving support, but not as the lead.) Also we have discussed at my workplace pursuing publications, but that will be in healthcare related projects, not in Crim/CJ.

Part of the reason is that the time it takes to do a peer reviewed publication is quite a bit longer than a simple blog post. Take for instance my recent post on incorporating harm weights into the WDD test. I received the email question for this on Wednesday 11/18, thought about how to tackle the problem overnight, and wrote the blog post the following Thursday morning before my CrimCon presentation (I took off work to attend the panel with no distractions). So it took me around 3 hours in total. Many of my blog posts take somewhat longer, but I definitely do not spend more than 10-20 hours on an individual one (and that includes the coding part; the writing part is mostly trivial).

I have attempted to guess the relative time it takes to do a peer reviewed publication based on my past work. I averaged around 5 publications per year, worked on average 50 hours a week while I was an academic, and spent, I am guessing, something like 60% to 80% (or more) of my time on peer review publications. Say I worked 51 weeks a year (I definitely did not take any long vacations, and definitely still put in my regular 50 hours over the summertime); that is 51*50 = 2550 hours. So that means around (2550*0.6)/5 ~ 300 or (2550*0.8)/5 ~ 400, so an estimate of 300 to 400 hours devoted to an individual peer review publication over my career. This will be high (as it absorbs things like grants I did not get), but is in the ballpark of what I would guess (I would have guessed 200+).

So this is an average. If I had recorded the time, I may have had a paper only take around 100 hours (I don't think I could squeeze one out in less than that), and I have definitely had some take over 400 hours! (For my Mapping RTM using Machine Learning paper I easily spent over 200 hours just writing computer code – not to brag, it was mostly me being inefficient and chasing a few dead ends. But that is a normal part of the research process.)

So it is hard for me to say, OK here is a good blog post that took me 3 hours, now I should go and spend another 300 hours to write a peer review publication. Some of that effort to publish in peer review journals is totally legitimate. For me to turn those blog posts into a peer review article I would need a more substantive real-life application (if not multiple real-life applications), and perhaps detailed simulations and comparisons to other techniques for the methods posts. But a bunch of it is just busy work – the front-end lit review and answering petty questions from peer reviewers is a very big chunk of that 300 hours (and has very little value added).

My blog posts typically get many more views than my peer review papers do, so I have very little motivation to get the stamp of approval of peer review. My blog posts take far less time, are more widely read, and are likely more accessible than peer reviewed papers. Since I am not on the tenure track and do not get evaluated by peer reviewed publications anymore, there is not much motivation to continue them.

I do have additional ideas I would like to pursue. Fairness and efficiency in siting CCTV cameras is a big one on my mind. (I know how to do it, I just need to put in the work to do the analysis and write it up.) But again, it will likely take 300+ hours for me to finish that project. And I do not think anyone will even end up using it in the end – peer reviewed papers have very little impact on policy. So my time is probably better spent writing a few blog posts and playing video games with all the extra time.

If you are an editor reading this, I still do quite a few peer reviews (so feel free to send me those). I actually have more time to do those promptly since I am not hustling writing papers! I have debated whether it is worth it to start my own peer reviewed journal, or maybe contribute to editing an already existing journal (I just joined the JQC editorial board). Or maybe start writing my own crime analysis or methods textbooks. I think that would be a better use of my time at this point than pursuing individual publications.

Lit reviews are (almost) functionally worthless

The other day I got an email from ACJS about the most downloaded articles of the year for each of their journals. For The Journal of Criminal Justice Education it was a slightly older piece, How to write a literature review, published in 2012 by Andrew Denney & Richard Tewksbury (DT from here on). As you can guess from the title of my blog post, it is not my most favorite subject. I think it is actually an impossible task to give advice about how to write a literature review. The reason is that we have no objective standards by which to judge a literature review – whether one is good or bad is almost wholly subject to the discretion of the reader.

I don't think the DT article gives bad advice per se. Use an outline? Golly, I suggest students do that too! Be comprehensive in your lit review about covering all relevant work? Well, who can argue with that!

I think an important distinction to make in the advice DT give is the distinction between functional actions and symbolic actions. Functional in this context means an action that makes the article better accomplish some specific function. So for example, if I say you should translate complicated regression models to more intuitive marginal effects to make your results more interpretable for readers, that has a clear function (improved readability).

Symbolic actions are those that are merely intended to act as a signal to the reader. So if the advice is along the lines of, you should do this to pass peer review, that is on its face symbolic. DT's article is nearly 100% about taking symbolic actions to make peer reviewers happy. Most of the advice doesn't actually improve the content of the manuscript (or, in the most charitable interpretation, how it improves the manuscript is at best implicit). In DT's section Why is it important, this focus on symbolic actions becomes pretty clear. Here is the first paragraph of that section:

Literature reviews are important for a number of reasons. Primarily, literature reviews force a writer to educate him/herself on as much information as possible pertaining to the topic chosen. This will both assist in the learning process, and it will also help make the writing as strong as possible by knowing what has/has not been both studied and established as knowledge in prior research. Second, literature reviews demonstrate to readers that the author has a firm understanding of the topic. This provides credibility to the author and integrity to the work’s overall argument. And, by reviewing and reporting on all prior literature, weaknesses and shortcomings of prior literature will become more apparent. This will not only assist in finding or arguing for the need for a particular research question to explore, but will also help in better forming the argument for why further research is needed. In this way, the literature review of a research report “foreshadows the researcher’s own study” (Berg, 2009, p. 388).

So the first argument, that a lit review forces a writer to educate themselves, may offhand seem like a functional objective. It doesn't make sense though, as lit reviews are almost always written ex post, after the research project is done. The point of writing a paper is not to educate yourself, but to educate other people on your research findings. The symbolic motivation for this viewpoint becomes clear in DT's second point: you need to demonstrate credibility to your readers. In terms of integrity, if the advice in DT was 'consider creating a pre-analysis plan' or 'release data and code files to replicate your results', that would be functional advice. But no, it is important to wordsmith how smart you are so reviewers perceive your work as more credible.

Then the last point in the paragraph, articulating the need for a particular piece of research, is again a symbolic action in DT's essay. You are arguing to peer reviewers about the need for a particular research question. I understand the spirit of this, but think about what function it serves. It is merely a signal to reviewers to say: given finite space in a journal, please publish my paper over some other paper, because my topic is more important.

You actually don't need a literature review to demonstrate a topic is important and/or needed – you can typically articulate that in a sentence or two. For a paper I reviewed not too long ago on crime reductions resulting from CCTV installations in a European city, I was struck by another reviewer's critique saying that the authors "never really motivate the study relative to the literature". I don't know about you, but the importance of that study seems pretty obvious to me. But yeah sure, go ahead and pad that citation list with a bunch of other studies looking at the same thing to make some peer reviewers happy. God forbid you simply cite a meta-analysis on prior CCTV studies and move onto better things.

What should a lit review accomplish?

So again, I don't think DT give bad advice – mostly vapid, but not obviously bad. DT focus on symbolic actions in lit reviews because, as lit reviews are currently performed in CJ/Crim journals, they are almost 100% symbolic. They serve almost no functional purpose other than as a signal to reviewers that you are part of the club. So DT give about the best advice possible for navigating a series of arbitrary critiques with no clear standard.

As an example for this position that lit reviews accomplish practically nothing, conduct this personal experiment. The next peer review article you pick up, do not read the literature review section. Only read the abstract, and then the results and conclusion. Without having read the literature review, does this change the validity of the paper's findings? For the most part it does not. People get their feelings hurt by not being cited (including myself), but even if someone fails to cite some of my related work, it pretty much never impacts the validity of that person's findings.

So DT give advice about how peer review works now. No doubt those symbolic actions are important to getting your paper published, even if they do not improve the actual quality of the manuscript in any clear way. I would rather address the question of what I think a lit review should look like – not what you should do to placate three random people and the editor. So again, I think the best way to think about this is by articulating specific functions a lit review accomplishes in terms of improving the manuscript.

Broadening the scope a bit to consider the necessity of citations: the majority of citations in articles are perfunctory, but I don't think people should plagiarize. So when you pull a very specific piece of information from a source, I think it is important to cite that work. Say you are using a survey instrument developed by someone else; citing the work that establishes that instrument's reliability and validity, as well as the original population those measures were established on, is certainly useful information to the reader. Sources of information/measures, or a recent piece establishing the properties of your statistical model, are I think other good examples of things to cite in your work. Unfortunately I cannot give a bright line here; I don't cite Gauss every time I use the normal distribution. But if I am using an important code library someone else developed, citing it is useful, inasmuch as someone who wants to do a similar project could use the same library.

In terms of discussing relevant results in prior studies, again the issue is that the boundary of what is relevant is very difficult to articulate. If there is a relevant meta-analysis on a topic, it seems sufficient to me to simply state the results of the meta-analysis. Why do I think that is important? It helps inform your priors about the current study. So if the meta-analysis effect size is X, and the current study has an effect size much larger, it may give you pause. It is also relevant if you are generalizing from the results of the study; your study is just another piece of evidence in addition to the meta-analysis, not an island all by itself.

I am not saying discussing specific prior results is never needed, but it does not need to be extensive. So if studies Z, Y, X are similar to yours but all had null results, and you think it was because the sample sizes were too small, that is relevant and useful information. (Again, it changes your priors.) But it does not need to be belabored in detail. The current standard of articulating different theoretical aspects ad nauseam in Crim/CJ journals does not improve the quality of manuscripts. If you do a hot spots policing experiment, you do not need to review all the different minutiae of general deterrence theory. Simply saying this experiment is likely to only accomplish general deterrence, not specific deterrence, seems sufficient to me personally.

When you propose a book you need to say 'here are some relevant examples' – I think the same idea would be sufficient for a lit review. OK, here is my study; here are a few additional related studies I think the reader may be interested in. This accomplishes what contemporary lit reviews do in a much more efficient manner – citing more articles makes it much more difficult to pull out the really relevant related work. I admit this does not improve the quality of the current manuscript in a specific way, but it helps the reader identify other sources of interest. (As a reader I typically go through the citation list and note a few articles I am interested in; this helps me accomplish that task much more quickly.)

I've already sprinkled a few additional pieces of advice in this blog post (marginal effect estimates, pre-analysis plans, sharing data and code), although you may say they don't belong in the lit review. Whatever; those are things that actually improve either the content of the manuscript or the actual integrity of the research, not some spray paint on your flowers.


CrimRxiv, Alt-Journal Contributions, and Mike Maltz’s Retrospective

As I'm sure followers of mine know, I am a big proponent of posting pre-prints. Scott Jacques has spearheaded a specifically criminology-focused pre-print server titled CrimRxiv. It is still in beta, but anyone can contribute a paper if they want.

One of the things Scott and I have been jamming about is how to leverage crimrxiv to make a journal that not only takes advantage of all the goodies on the internet, such as being able to embed interactive graphics or other rich media directly in journal articles, but also really widens the scope of what 'counts' in terms of scholarly contribution. Why can't things like a cool app, or a really good video lecture you edited, or a blog post illustrating code be put on the same level as journal articles?

Part of the reason I am writing this blog post is that I saw Michael Maltz recently publish a retrospective on his career on Academia.edu. This isn’t a typical journal article, but despite that there is no reason why you shouldn’t share such pieces. So I was able to convince Mike to post A Retrospective Look at My Professional Life to crimrxiv. When he first posted it on Academia.edu here was my response on how Mike (despite never having crossed paths) has influenced my career.


Hi Michael and thank you for sharing,

I've followed your work since I was a grad student at Albany. I initially got hooked on data viz via Tufte's book. When I looked for examples of criminologists discussing data viz, you were the only one I found. That was sometime around 2010, so you had that chapter in the handbook of quantitative crim. You also had another article about drawing glyphs to illustrate life course transitions that I was familiar with.

When I finished my classes at SUNY, I then worked at Troy as a crime analyst while finishing my dissertation. I doubt any of the coffee shops were the same from your time, but I did like walking over to Famous hotdogs for lunch every now and then.

Most of my work at the PD was making time series graphs and maps. No regression, so most of my stats training was not particularly useful. Even the mapping course I took, which focused on areal data analysis, was not terribly relevant.

I tried to do similar projects to your glyph life-courses with interval censored crime data, but I was never really successful with that, they always ended up being too complicated with even moderately large crime datasets, see https://andrewpwheeler.com/2013/02/28/interval-graph-for-viz-temporal-overlap-in-crime-events/ and https://andrewpwheeler.com/2014/10/02/stacking-intervals/ for my attempts.

What was much more helpful was simply doing monitoring metrics over time, simple running means, and then I just inverted the PDF of the Poisson to give error bars, e.g. https://andrewpwheeler.com/2016/06/23/weekly-and-monthly-graphs-for-monitoring-crime-patterns-spss/. Then cases that were outside the error bands signified an anomalous pattern. In Troy there was an arrest of a single prolific person breaking into cars, and the trend went from a creeping 10 year high to a 10 year low instantly in those graphs.

So there again we have your work on the Poisson distribution and operations research in that JQC article. Also sometime in there I saw a comment you made on Andrew Gelman’s blog pointing to your work with error bands for BJS. Took that ‘fan chart’ idea later on and provided error bands for city level and USA level homicide trends, e.g. https://apwheele.github.io/MathPosts/FanChart_NewOrleans.html. Most of popular discussion of large scale crime trends is misguided over-interpreting short term noise in my opinion.

So all my degrees are in criminal justice, but I have been focusing more on linear programming over time borrowing from operations researchers as well, https://andrewpwheeler.com/2020/05/29/an-intro-to-linear-programming-for-criminologists/. I’ve found that taking outputs from a predictive model and then applying a decision analysis to specifically articulate strategies CJ agencies should take is much more fruitful than the typical way academic research is done.

Thank you again for sharing your story and best, Andy Wheeler
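For readers curious what the monitoring approach described in that letter can look like in code, here is a minimal sketch in Python (simulated weekly counts and a simple running mean; the scipy call is my own choice, not anything from Mike's work or my old Troy setup):

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
weekly = rng.poisson(12, 104)    #two years of simulated weekly counts
weekly[100:] = 25                #an anomalous spike at the end

window = 8
for t in range(window, len(weekly)):
    mean = weekly[t - window:t].mean()              #running mean of prior weeks
    low, high = poisson.ppf([0.025, 0.975], mean)   #95% Poisson error band
    if not (low <= weekly[t] <= high):
        print(f"week {t}: count {weekly[t]} outside [{low:.0f}, {high:.0f}]")

Weeks falling outside the band flag an anomalous pattern, the same way the arrest of the prolific car burglar showed up in the Troy graphs.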

Deleted Twitter Account

I’ve decided to delete Twitter. It is for multiple reasons in the end.

Reason 1, I was definitely addicted to it. Checked it quite often during the daytime. Deleting off of my phone (and ditto for email) was a good first step, but I still checked it quite a bit when I was on my personal computer.

Reason 2 – there is an XKCD comic about staying up arguing with people on the internet. I was constantly tempted to do this on Twitter, and it is never really worth it. Several examples come to mind where I did this – a comment stream with Pete Kraska the other day about grant funding, and in the past with Travis Pratt over pre-prints. Pete/Travis had an ounce of truth in their initial statements, but made sweeping generalizations that don't describe the majority of people (which included me, hence my urge to respond). While they likely did not intend to say something directly about me, they did so in making general stereotyping comments.

I respect each as scholars, but they just have ill-informed opinions in those cases. You would think criminologists would be less likely to attribute the malice of a few to widespread groups of individuals, but so it goes. No doubt I have bad/wrong opinions all the time as well.

Reason 3, a former colleague the other day was upset I liked a tweet that was a critique of their work. This is just one example, but there are a million different things people could take offense to. I am not interested in even the potential of saying or doing something that would result in the kind of sandbag onslaught I've seen several times on Twitter. I of course do not intentionally mean to hurt people's feelings, but I do not feel like defending minor stuff like that either. Worrying about things like that is just not good for my mental health.

There are of course good things I will be missing out on. I initially joined Twitter to keep up on the news. Between Google Scholar and CrimPapers I can keep up on academic work. (Actual news I should definitely not be getting from tweets!) But the biggest benefit in the end was that there are several internet friends I only met on Twitter and would not have had the opportunity to meet without it.

And of course it was nice to tweet a blog post and get a dozen likes (or say something snarky and get 30). So my work will have less exposure than before, but honestly it was not much to begin with. My last post had more likes (around a dozen) than referrals from Twitter (around half that!). It is not like tweeting my blog posts resulted in 1000's of views; more like a few dozen extra most of the time (and a few hundred extra in the best of times). So I will just continue to write blog posts, and they will have a few less views than before. I wish my blog had bigger reach, but it is really just my place for creative output.

I encourage folks to always reach out and send me an email to keep in touch if you are one of my former Twitter friends. Academia can be a lonely place in normal times, and with isolating in the pandemic I can’t even imagine what it would be like without my family. I don’t think my time spent on Twitter was good for my personal well being though in the end, even though it did definitely help me be part of a larger community of colleagues and friends. 

Review of Trees, maps, and theorems: Effective Communication for rational minds by Jean-luc Doumont

I was recently introduced to the work of Jean-luc Doumont via Robert Kosara. So I picked up his book, Trees, maps, and theorems: Effective Communication for rational minds, and it does not disappoint.

In a nutshell, if you have read Tufte's Visual Display of Quantitative Information and you like it, you will like Doumont's book as well. He persists in the same minimalist ideal as Tufte, but has advice not just about statistical graphics, but about all aspects of scientific communication: writing, presentations, and even email.

Doumont’s chapter on effective graphical displays is mainly a brief overview of Tufte’s main points for statistical graphics (also he gives some advice on pictures and icons), but otherwise the book has quite a bit of new advice. Here is a quick sampling of some of the points that most resonated with me:

The rule of three: It is very difficult to maintain any more than three items in our short term memory. While some people use the magic number 7 rule, Doumont notes this is clearly the upper limit. Doumont's suggestion of using three (such as for subheadings in a document, or bullet points in a powerpoint presentation) also coincides with Howard Wainer's suggestion to limit the number of significant digits in tables to three.

For oral presentations with slides, he suggests printing out your slides 6 to a page on standard letter size paper. If you have a hard time reading them, the font is too small. I'm not sure if this fits in line with my suggestions for font sizes; it will take some more investigation on my part. Another piece of advice for oral presentations is that you can't read text on slides and listen to the presenter at the same time. Those two inputs compete in our brain, as opposed to images and talking at the same time. Doumont gives the same advice as Tufte (prepare a handout), but I don't think this is a good idea. (The handout can be distracting.) If you need people to read text, just take a break and get a sip of water. Otherwise make the text as minimal as possible.

My only real point of contention is that Doumont makes the mistake, in talking about graphics, of saying one only needs two points labeled on an axis. This is not true in general; you need three. Imagine I gave you an axis:

2--?--8

For a linear scale, the missing point would be 5, but for a logarithmic scale (in base 2) the missing point would be 4. I figured this is worth pointing out as I recently reviewed a paper where a legend for a raster image (pretty sure ArcGIS was the culprit) only had the end points labeled.
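To spell out the arithmetic: on a linear scale the missing point is the arithmetic mean of the endpoints, while on a log scale it is the geometric mean, i.e. (2 + 8)/2 = 5 versus \sqrt{2 \cdot 8} = 4. With only two labeled points a reader cannot tell which scale is being used.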

Doumont also has a bunch of advice about writing that I will need to periodically reread. In general one point is that the first sentence of either a section (or paragraph) should be declarative as to the point of that section. Sometimes folks lead with fluff that is only revealed to be related to the material later on in the section.

My writing and work will definitely not live up to Doumont’s standard, but it is a goal I believe scientists should strive for.

Writing equations in Microsoft Word

A student asked me about using LaTeX the other day, and I stated that it is a bit of a hassle for journal articles in our field, so I have begun to use it less. Most of the journals in my field (criminology and criminal justice) make it difficult to turn in an article in that format. Many refuse to accept PDF articles outright, and the last time I submitted a LaTeX file to JQC (a Springer journal) that would not compile, I received zero help from staff over a month of emails, so I just reformatted it into a Word document anyway. The last time I submitted a LaTeX document to Criminology, a reviewer said it probably had typos – without pointing out any, of course. (FYI folks, besides doing the obvious and pointing out typos if they exist, my text editor has a spell checker, same as Word, to highlight typos.) Besides this, none of my co-workers use LaTeX, so it is a non-starter when I am collaborating. I did my dissertation in LaTeX, and I would do that in LaTeX again, but for smaller articles Word is not a big deal.

The main nicety of LaTeX is math equations. I don't do too heavy of math stuff, and I have figured out the Microsoft Word equation editor enough to suit most of my needs. So here is a set of examples for many of the use cases I have needed in journal articles. I also have this in a Word (docx) document and a PDF for handy reference. Those have a few references I gathered from the internet, but the best IMO is this guy's blog (who I think is a developer for Word) and this document authored by the same individual.

One of the things to note about the equation editor in Word is that you can type various shortcuts and then they will be automatically converted. For example, you can type \gamma, hit the space bar, and then the equation will actually change to showing the gamma symbol. So there are some similarities to LaTeX. (Another pro-tip, to start an equation in Word you can press Alt=.) In the subsequent examples I will use <space> to represent hitting the space bar, and there are other examples of using <back> (for the left arrow key) and <backspace> for the backspace button.

Greek characters, subscripts and superscripts

If you type

log<space>(\lambda) = \beta_0<space> + \beta_1<space>(X) + \beta_2<space>(X^2<space>)

you get:
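(The screenshot of the rendered equation is not reproduced here; in LaTeX notation the result is:)

\log(\lambda) = \beta_0 + \beta_1 (X) + \beta_2 (X^2)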

Accents

For these you need to hit the space key twice, so

x\hat<space><space> = y\bar<space><space>

turns into:
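(Rendered, again in LaTeX notation since the screenshot is not shown:)

\hat{x} = \bar{y}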

Expected value and variance

For the equivalent of \mathbb in LaTeX, you can do

\doubleV<space>(X)= \doubleE<space>(X^2<space>) - \doubleE<space>(X)^2<space>
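(In LaTeX notation this renders as the usual variance identity:)

\mathbb{V}(X) = \mathbb{E}(X^2) - \mathbb{E}(X)^2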

Plain text within equation

To do plain text within an equation, equivalent to \text{*} in LaTeX, you can use double quotes. (Note that you do not need a backslash before “log”.) So

Y = -1\cdot<space>log<space>("Property Crime"<space>) + (not pretty text)

looks like:
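(Rendered in LaTeX notation, the quoted part comes out as upright text while the unquoted part is typeset as math italics, roughly:)

Y = -1 \cdot \log(\text{Property Crime}) + (not\ pretty\ text)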

Sum and product

To get the product symbol is simply \prod<space>, and here is a more complicated example for the sum:

n^-1\cdot<space>\sum^n_(i=1)<space>x_i<space>= x\bar<space><space>
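(In LaTeX notation the sum example renders as:)

n^{-1} \cdot \sum_{i=1}^{n} x_i = \bar{x}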

Square root

Square roots always cause me trouble for how they look and kern (both in LaTeX and Word). Here is how I would do an example of Euclidean distance,

d_ij<space>=\sqrt<space><space><back>(x_i-x_j)^2+(y_i-y_j)^2<space>
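(Which renders, in LaTeX notation, as:)

d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}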

Fractions

The big (stacked) fraction is simple, but I had to search for a bit to find how to do inline fractions (what Word calls "linear"). Here, a backslash followed by a forward slash gives the inline fraction:

1/n = 1\/n
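(In LaTeX notation, the first fraction is stacked and the second stays inline:)

\frac{1}{n} = 1/n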

Numbering an equation

I’ve seen quite a few different hacks for numbering equations in Word. If you need to number and refer to them in text often, I would use LaTeX. But here is one way to do it in Word.

E = mc^2#(30)<enter>

produces below (is it just me or does this make the equation look different than the prior ones in Word?):
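(In LaTeX notation the result is the equation with a right-aligned number, roughly:)

E = mc^2 \tag{30}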

Multiple lines of equations

For a while I did not think this was possible, but I recently found examples of multiline equations (equivalent to the align environment in LaTeX). The way this works is you place an & sign before the symbols you want to line up (same as LaTeX), but for Word to split a line you use @. So if you type

\eqarray(10x&=4y@5x&=2y)\eqarray<space><backspace>

you will get:
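(In LaTeX notation the result is the aligned pair:)

\begin{aligned} 10x &= 4y \\ 5x &= 2y \end{aligned}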

Have any more good examples? Let me know in the comments!

The other nicety of LaTeX is formatting references – you are on your own for that in Word though.

Sentence length in academic articles

While reviewing a paper recently it struck me that the content was (very) good, but the writing was stereotypically academic. My first impression was that this was caused by monotonously long sentences. See this advice from Gary Provost (via Francis Diebold). Part of the reason long sentences are undesirable is not just aesthetic though – longer sentences are harder to parse, hold in memory, and subsequently understand. See Steven Pinker's The Sense of Style writing guide for discussion.

So I did some text analysis of the sentences. To do the text analysis I used the nltk library in python, and here is the IPython notebook to replicate it yourself if you care to do so (apparently Wakari is not a thing anymore, so here is the corpus for Huck Finn and for my Small Sample paper). In the notebook I have saved two text corpuses; one is the finished draft of my Small Sample article. I compared its sentence length to Mark Twain's Huckleberry Finn (text via here).

For a simple example getting started with the library, here is an example of tokenizing a string into words and sentences:

#some tests for http://www.nltk.org/, nice book to follow along
import nltk
#nltk.download('punkt') #need to download this for the English sentence tokenizer files

#this splits up punctuation
test = """At eight o'clock on Thursday morning Arthur didn't feel very good. This is a second sentence."""
tokens = nltk.word_tokenize(test)
print(tokens)

ts = nltk.sent_tokenize(test)
print(ts)

The first prints out each individual word (plus punctuation in some circumstances) and the second marks individual sentences. I have the line #nltk.download('punkt') commented out, as I downloaded it once already. (Running once in Wakari I did not need to download it again – I presume it would work similarly on your local machine.)

So what I did was transfer the PDF document I was reviewing to a text file and then clean up things like the section headers (ditto for the academic articles I compare it to). In Huckleberry Finn I took out the table of contents and the "CHAPTER ?" parts. I also started a list of tokens that were parsed as words but that I did not want to count after the sentences and words were tokenized. For example, an inline cite such as (X, 1996) would be split into 4 words by the original tokenizer: (, X, 1996 and ). (The '\x96' in the list below is an en-dash.) The code below takes those instances out.

#Get the corpus
f = open('SmallSample_Corpus.txt')
raw = f.read()

#Count number of sentences
sent_tok = nltk.sent_tokenize(raw)
ns = len(sent_tok)

#Count number of words
word_tok = nltk.word_tokenize(raw) #need to take out commas plus other stuff
NoWord = [',','(',')',':',';','.','%','\x96','{','}','[',']','!','?',"''","``"]
word_tok2 = [i for i in word_tok if i not in NoWord]
nw = len(word_tok2)

#Average sentence length is words divided by sentences
print(nw / ns)

There are inevitably more instances of things that shouldn't be counted as words, and those inflate the average sentence length. For example, I spotted a few possessive 's tokens that were listed as different words. (The nltk library is smart and lists contractions as separate words.)

So someone may know a better way to count the words, but all the articles should have the same biases. In my tests, here are the average number of words per sentence:

  • article I was reviewing, 28
  • my small sample article, 27
  • my working article (that has not undergone review), 25
  • Huck Finn, 20

So the pot is calling the kettle black here – my writing is not much better. I also looked at the difference between an in-print article and a working draft, as I bet responses to reviewers make the sentences longer – all those hedged statements that academics love.

Looking at the academic article histograms, they are fairly symmetric, confirming my impression about monotonous sentence length. To make the histograms I used the pandas library, which has a nice simple method.

sent_len = []
for i in sent_tok:
    sent_w1 = nltk.word_tokenize(i)
    sent_w2 = [w for w in sent_w1 if w not in NoWord]
    sent_len.append(len(sent_w2))

import pandas as pd

dfh = pd.DataFrame(sent_len)
dfh.hist(bins = 50);

Here is the histogram for my small sample paper:

And here it is for Huck Finn

(I'm not much of an exemplar for making graphs in python – forgive the laziness in the figures.) Apparently analyzing sentence length has a long history – see a paper by G. Udny Yule in 1939! From a quick perusal, a long right tail is more usual when analyzing texts. The symmetry I see for this sample of academic articles is not the norm.

There could be more innocuous reasons for this. Huck Finn has dialogue with shorter sentences, and the academic articles have numbers and citations. (Although I think it is reasonable to count those things towards sentence complexity, “1” or “one” should have the same complexity.)

I will have to keep this in mind in the future (maybe I should write my articles in poem form)!

Music and distractions in the workplace

I was recently re-reading Zen and the Art of Motorcycle Maintenance, and it re-reminded me of why I do not like to listen to music in the workplace. The thesis in Pirsig's book (in regards to listening to music) is simple: you can't concentrate entirely on the task at hand if you have music distracting you. So those who value their work tend not to have idle distractions like music playing (and to be fully engrossed in their work).

I have worked in various shared workspaces (cubicles and shared offices) for quite a while now, and I do have a knack for going off into space and ignoring all of the background noise around me. But I still do not like listening to music, even though I have learned to cope with the situation. At this point I prefer the open office workspace, as there is at least no illusion of privacy. When I worked in a cubicle, someone coming up behind me and scaring me was basically a daily occurrence.

Scott Adams, the artist of the Dilbert comic, had a recent blog post saying that music is the lesser evil compared to constant distractions via the internet (email, facebook, twitter, etc.) This I can understand as well, and sometimes I turn off the wi-fi to try to get work done without distraction. I don’t see how turning on music helps, but given its prevalence it may just be differences between myself and other people. I should probably turn off the wi-fi for all but an hour in the morning and an hour in the afternoon everyday, but I’m pretty addicted to the internet at this point.

How easily I am distracted partly depends on the task I am currently working on though. Sometimes I can get really engrossed in a particular problem and become so obsessed with it that you could probably set the office on fire and I wouldn't notice. For example, this programming problem dominated my thoughts for around two days, and I ended up thinking of the general solution while I did not have access to a computer (I was waiting for my car to get inspected). Most of the time though I can only give that type of concentration for an hour or two a day, and the rest of the time I am working in a state of easy distraction.

Background music I don’t like, and other ambient noises I can manage to drown out, but background TV drives me crazy. My family was watching videos (on TV and tablets) the other day while I was reading Zen and ironically I became angry, because I was really into the book and wanted to give it my full concentration. I know people who watch TV in bed to go to sleep, and it is giving me a headache just thinking about it while I am writing this blog post.

I highly recommend both Zen and the Art of Motorcycle Maintenance and Scott Adam’s blog. I’m glad I revisited Zen, as it is an excellent philosophical book on the logic of science that did not make much of an impression on me as an undergrad, but I have a much better grasp of it after having my PhD and reading some other philosophy texts (like Popper).

Solving problems as a metaphor for scientific writing

One analogy I hear from academics describing the process of writing a literature review is identifying the gaps in the prior literature(s). I was reading Helping doctoral students write: Pedagogies for supervision recently, and Kamler & Thomson use this same analogy in describing the process of writing a literature review for a dissertation (although it is generally the same for shorter articles or books). Similar terminology Kamler & Thomson use is blank spots and blind spots (see page 45). Since in that same chapter Kamler & Thomson suggest the use of appropriate metaphors in describing the work of writing a literature review, I figured a critique of this one would be apropos.

I do not think the analogy is completely off base – but I do not like it, as it does not jibe with my personal experience of how I go about writing an article or thinking about research more generally. The first reason I do not like this terminology is that it has negative connotations for prior research. I think of building knowledge as a more cumulative endeavour, as opposed to filling in between the lines of prior research.

For an analogy, say a researcher is attempting to improve the fuel efficiency of small combustible engines. It is likely they take mostly prior engineering knowledge about combustible engines and provide some modifications to slightly improve the design. Filling a gap implies to me an explicit design flaw in prior engines, when in reality it is more likely the researcher brings new knowledge to improve the design, and only in the context of the new research is the old design potentially described as inefficient. A social science example may be evaluating the costs and benefits to a particular policy in place by a public institution. The policy may be evidence based, and so an evaluation of the policy provides new information to that agency of whether it works as intended, or more general scientific knowledge about applying that policy in a real world setting. Neither seem to me filling in a gap, more so contributing and/or refining a set of knowledge already established.

I like the metaphor of the accumulation of knowledge, like a pyramid one brick at a time, better in terms of describing what I do when I write a literature review as opposed to identifying gaps. A convenient format for a literature review is to take a historical walk through the literature, and let the chronological order of previous findings be the guide for how you write the lit. review. But that metaphor is not sufficient to me either, as it implies a very linear structure, whereas prior research strikes me as more sphere-like — there is a base to which you add but the direction of the current research is not limited by the trajectory of the prior work. (A more accurate physical analogy may be an irregular growth of cells — they may meander in any particular direction but they always need to be connected to the prior work.) The scientific writer imposes a linear structure when describing prior work, but in reality the prior literatures are not that focused on whatever particular problem the current article is trying to address.

That is why I like the simple metaphor of identifying and solving a problem as a descriptor of what I do when I write a literature review – or even more broadly about describing the decisions I make in my research agenda. There are several reasons I prefer this analogy to either the accumulation of knowledge or identifying gaps. Identifying gaps implies you can read the prior literature and the gaps will be obvious — this is not the case. The prior literature is written in a particular context – the authors cannot anticipate future conditions or how that work will potentially be applied in the future. The gap does not exist in the current or prior literatures, you as a writer/researcher make the gap. I prefer problem solving as opposed to the accumulation of knowledge because it implies the focused nature of the endeavour. You do not simply write a paper to add a linear line of prior knowledge, you use that prior knowledge to solve a particular problem you have in your current context. It is your job as a researcher to basically say how the prior knowledge helps to solve that problem, and then advance the current knowledge to solve your particular problem. (This focus on giving the writer agency seems to be in line with most of Kamler & Thomson’s advice as well.)

This is how Popper described how knowledge actually accumulates – people have problems and they try to learn how to solve them. There is no prior divine truth to which future knowledge is added. We simply have problems, and some research may show a better solution to that problem than prior knowledge (whether that prior knowledge is well established or simply folklore). The analogy is not perfect, as many researchers would say they do not solve problems but simply describe reality, but it is a frame of reference I find useful to describe how I approach writing, describe my research, and in particular how I approach consuming the prior literature. It shows how I take the prior work and apply it to my interest; I am not a passive reader when trying to synthesize prior work.