The Big 2020 Homicide Increase in Context

Jeff Asher recently wrote about the likely 2020 increase in homicides, stating it is an unprecedented increase. (To be clear, this is 2020 data! Homicide reporting data in the US is just a few months shy of a full year behind.)

In the past folks have found me obnoxious, as I often point to how homicide rates (even for fairly large cities) are volatile (Wheeler & Kovandzic, 2018). Here is an example of why I thought the media coverage of the 2015/16 homicide increase was overblown.

I later quantified this more formally with then students Haneul Yim and Jordan Riddell (Yim et al., 2020). We found the 2015 increase was akin to what folks on the news call a 1 in 100 year flood. So I was wrong, in the sense that it was a fairly substantive increase relative to typical year to year changes. Using the same methods, I updated the charts to 2020 data, and 2020 is obviously a much larger increase than the ARIMA model we fit based on historical data would expect:

Looking at historical data, people often argue "it isn't as high as the early 90's" – that is not the point I really intended to make (it is kind of silly to make a normative argument about the right or acceptable number of homicides) – but I can see how I conflated those arguments. Looking at the past is also about understanding the historical volatility (what is the typical year to year change). Here this is clearly a much larger swing (up or down) than we would expect based on the series for which we have decent US coverage (going back to 1960).

For thinking about crime spikes, I often come from my crime analyst days, where I was commonly posed with 'crime is on the rise' fear in the news and needed to debunk it (so I could get back to doing analysis on actual crime problems, not imaginary ones). One example: a convenience store was robbed twice in the span of 3 days, and of course the local paper ran a story that crime is on the rise. So I went and showed the historical crime trends to the Chief and the Mayor – no, commercial robberies are flat. And even for that scenario there were other gas stations that had more robberies in toto in the past few years of data. So when the community police officer went to talk to that convenience store owner about simply locking up his cash in more regular increments, I told that officer to also go to the other stores and give similar target hardening advice.

Another aspect of crime trends is not only whether a spike is abnormal (or whether we actually have an upward trend), but what causes it. I am going to punt on that – in short, it is basically impossible in normal times to know what caused short term spikes absent identifying specific criminal groups (which is not so relevant for nationwide spikes, although even in large cities one active criminal group can cause observable spikes). We have quite a bit of crazy going on at the moment – Covid, BLM riots, depolicing – I don't know what caused the increase and I doubt we will ever have a real firm answer. We cannot run an experiment to see why these increases occurred – so it is mostly political punditry pinning it on one theory versus another.

For the minor bit it is worth – the time series methods I use here signal that the homicide series is ARIMA(1,1,0), which means both an integrated random walk component and an auto-regressive component. Random walks will occur in macro level data in which the micro level data are a bunch of AR components. So this suggests one potential causal attribution for increased homicides is homicide itself (crime begets more crime), and this can cause runaway effects of long upward/downward trends. I don't know of a clear way to validate that theory though, nor any obvious utility in terms of saying what we should do to prevent increases in homicides or stop current trends. Even if we have national trends, any intervention I would give a thumbs up to is likely to be local to a particular municipality. (Thomas Abt's Bleeding Out is about the best overview of potential interventions that I mostly agree with.)
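For those who want the flavor of this check without the R code linked below, here is a minimal sketch in python – the homicide rates are made-up numbers, and statsmodels is swapped in for the R ARIMA fit used for the actual charts:

# Hedged sketch: one-step-ahead interval from an ARIMA(1,1,0) fit
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rate = pd.Series([5.4, 5.4, 5.5, 5.6, 5.4, 4.9, 4.5, 4.9, 5.4, 5.3, 5.0],
                 index=range(2009, 2020))  # made-up annual homicide rates

mod = ARIMA(rate, order=(1, 1, 0)).fit()
fc = mod.get_forecast(steps=1)
print(fc.summary_frame(alpha=0.01))  # 99% interval ~ the 1-in-100-year benchmark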


  • Wheeler, A. P., & Kovandzic, T. V. (2018). Monitoring volatile homicide trends across US cities. Homicide Studies, 22(2), 119-144.
  • Yim, H. N., Riddell, J. R., & Wheeler, A. P. (2020). Is the recent increase in national homicide abnormal? Testing the application of fan charts in monitoring national homicide trends over time. Journal of Criminal Justice, 66, 101656.

Notes for the 2019/2020 updated homicide data: 2019 data is available from the FBI page, 2020 homicide data I have taken from estimates at USA Facts, and total US population is taken from Google search results.

R code and data to replicate the chart can be downloaded here.

KDE plots for predicted probabilities in python

So I have previously written about two plots to make after fitting binary prediction models – calibration plots and ROC curves. One addition to these I am going to show here is kernel density estimate plots, broken down by the observed 0/1 value. One thing in particular I wanted these for is to showcase the distribution of the predicted probabilities themselves, which can be read off of the calibration chart, but not as easily.

I have written about this some before – transforming KDE estimates from the logistic to the probability scale in R. Here I will show some of these plots in python using the seaborn library. For the way the seaborn library is set up, it is easier to use edge weighting statistics to get unbiased estimates near the borders than to transform the KDE.

To follow along, you can download the data I will be using here. It is the predicted probabilities from the test set in the calibration plot blog post, predicting recidivism using several different models.

First to start, I load my python libraries and set my matplotlib theme (which is also inherited by seaborn charts).

Then I load in my data. To make it easier I am just working with the test set and several predicted probabilities from different models.

import pandas as pd
from scipy.stats import norm
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

# My theme

andy_theme = {'axes.grid': True,
              'grid.linestyle': '--',
              'legend.framealpha': 1,
              'legend.facecolor': 'white',
              'legend.shadow': True,
              'legend.fontsize': 14,
              'legend.title_fontsize': 16,
              'xtick.labelsize': 14,
              'ytick.labelsize': 14,
              'axes.labelsize': 16,
              'axes.titlesize': 20,
              'figure.dpi': 100}
matplotlib.rcParams.update(andy_theme)


And here I am reading in the data (just have the CSV file in my directory where I started python).

# Reading in the data with predicted probabilities
# Test set predictions from the calibration blog post

pp_data = pd.read_csv(r'PredProbs_TestCompas.csv',index_col=0)


So you can see this data has the observed outcome Recid30 – recidivism after 30 days (although again this is the test dataset). And then it also has the predicted probability for three different models (XGBoost, RandomForest, and Logit), and then demographic breakdowns for sex and race.

The plot I am interested in seeing is a KDE estimate for the probabilities, broken down by the observed 0/1 for recidivism. Here is the default graph using seaborn:

# Original KDE plot by 0/1
sns.kdeplot(data=pp_data, x="Logit", hue="Recid30", 
            common_norm=False, bw_method=0.15)

One problem you can see with this plot though is that the KDE estimates are smoothed beyond the data – you cannot have a predicted probability below 0 or above 1. Because we are using a Gaussian kernel, we can just reweight observations that are close to the edge, and then clip the KDE estimate. For example, an observation at a predicted probability of exactly 0 has half of its kernel mass below 0, so it gets a weight of 1/0.5 – double the weight. Note to do this correctly, you need to set the bandwidth the same for the seaborn kdeplot as well as the weights calculation – here 0.15.

# Weighting and clipping
# Amount of density below 0 & above 1
below0 = norm.cdf(x=0,loc=pp_data['Logit'],scale=0.15)
above1 = 1- norm.cdf(x=1,loc=pp_data['Logit'],scale=0.15)
pp_data['edgeweight'] = 1/ (1 - below0 - above1)

sns.kdeplot(data=pp_data, x="Logit", hue="Recid30", 
            common_norm=False, bw_method=0.15,
            clip=(0,1), weights='edgeweight')

This results in quite a dramatic difference, showing the model discriminates a bit better than the original graph suggested. The 0's were well discriminated, so there are many very low probabilities that were smoothed outside the legitimate range.

Another cool plot you can do that again shows calibration is to use seaborn’s fill option:

cum_plot = sns.kdeplot(data=pp_data, x="Logit", hue="Recid30", 
                       common_norm=False, bw_method=0.15,
                       clip=(0,1), weights='edgeweight', 
                       multiple="fill", legend=True)
cum_plot.legend_._set_loc(4) # move the legend to the lower right

As expected, this shows an approximately straight line in the graph, e.g. at 0.2 on the X axis the orange area should take up around 0.2 of the vertical fill.

Next, seaborn has another good function here, violin plots. Unfortunately you cannot pass weights to it. But another option is to simply resample your data a large number of times, using the weights you created earlier.

n = 1000000 #larger n will result in more accurate KDE
resamp_pp = pp_data.sample(n=n,replace=True, weights='edgeweight',random_state=10)

viol_sex = sns.violinplot(x="Sex", y="XGB", hue="Recid30",
                          data=resamp_pp, split=True, cut=0, 
                          bw=0.15, inner=None,
                          scale='count', scale_hue=False)
viol_sex.legend_.set_bbox_to_anchor((0.65, 0.95))

So here you can see we have more males in the sample, and they have a larger high risk blob that was correctly identified. Females have a risk profile more spread out, although there is a small clump of basically 0 risk that the model identifies.

In the graph above the areas of the violin KDEs are scaled to the counts, so in both the original and resampled data we have fewer females, and more black individuals (the weighted resample keeps those proportions roughly intact):

# Values for Sex for orig/resampled
print( pp_data['Sex'].value_counts(normalize=True) )
print( resamp_pp['Sex'].value_counts(normalize=True) )

# Values for Race orig/resampled
print( pp_data['Race'].value_counts(normalize=True) )
print( resamp_pp['Race'].value_counts(normalize=True) )

But if we set scale='area' in the chart the violins are the same size:

viol_race = sns.violinplot(x="Race", y="XGB", hue="Recid30",
                           data=resamp_pp, split=True, cut=0, 
                           bw=0.15, inner=None,
                           scale='area', scale_hue=True)
viol_race.legend_.set_bbox_to_anchor((0.81, 0.95))

I will have to see if I can make some time to contribute to seaborn to make it so you can pass in weights to the violinplot function.

ROC and calibration plots for binary predictions in python

When doing binary prediction models, there are really two plots I want to see. One is the ROC curve (and associated area under the curve stat), and the other is a calibration plot. I have written a few helper functions to make these plots for multiple models and multiple subgroups, so figured I would share, binary plots python code. To illustrate their use, I will use the same Compas recidivism data I have used in the past, (CSV file here). So once you have downloaded those two files you can follow along with my subsequent code.

Front Prep

First, I have downloaded the file and the PreppedCompas.csv file to a particular folder on my machine, D:\Dropbox\Dropbox\Documents\BLOG\binary_plots. To import these functions, I append that path using sys, and change the working directory using os. The other packages are what I will be using to fit the models.

# Front end prep

import pandas as pd
import numpy as np
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

import os
import sys

my_dir = r'D:\Dropbox\Dropbox\Documents\BLOG\binary_plots'

# Append the folder to the path and change the working directory
sys.path.append(my_dir)
os.chdir(my_dir)
import binary_plots

np.random.seed(10) # setting the seed for the random split for train/test

Next up I prepare the data. This is just boring stuff: turning categorical variables into various dummies and making a train/test split for the data (which can be done in a variety of ways).

# Prepping Compas Data

# For notes on data source, check out my prior Compas posts
recid = pd.read_csv('PreppedCompas.csv')

# Preparing the variables I want
recid_prep = recid[['Recid30','CompScore.1','CompScore.2','CompScore.3']].copy()
recid_prep['Male'] = 1*(recid['sex'] == "Male")
recid_prep['Fel'] = 1*(recid['c_charge_degree'] == "F")
recid_prep['Mis'] = 1*(recid['c_charge_degree'] == "M")

print( recid['race'].value_counts() )
dum_race = pd.get_dummies(recid['race'])
# White for reference category
for d in list(dum_race):
    if d != 'Caucasian':
        recid_prep[d] = dum_race[d]

print( recid['marital_status'].value_counts() )
dum_mar = pd.get_dummies(recid['marital_status'])
recid_prep['Single'] = dum_mar['Single']
recid_prep['Married'] = dum_mar['Married'] + dum_mar['Significant Other']
# reference category is separated/unknown/widowed

#Now generating train and test set
recid_prep['Train'] = np.random.binomial(1,0.75,len(recid_prep))
recid_train = recid_prep[recid_prep['Train'] == 1].copy()
recid_test = recid_prep[recid_prep['Train'] == 0].copy()

# Independent variables
ind_vars = ['CompScore.1','CompScore.2','CompScore.3','Male','Fel','Mis',
            'African-American','Asian','Hispanic','Native American','Other',
            'Single','Married']

# Dependent variable
y_var = 'Recid30'

Next, the sklearn library makes it quite easy to fit a set of multiple models. Most of the time I start with XGBoost, random forests, and a normal logistic model with no coefficient penalty. I just stuff the base model objects in a dictionary, pipe in the same training data, and fit the models. Then I add the predicted probabilities from each model into the test dataset. (These plots should only be made on the test dataset; the models are of course fit on the training dataset.)

# Training three different models, Logit,
# Random Forest, and XGBoost

final_models = {}
final_models['XGB'] = XGBClassifier(n_estimators=100, max_depth=5)
final_models['RF'] = RandomForestClassifier(n_estimators=1000, max_depth=10, min_samples_split=50)
final_models['Logit'] = LogisticRegression(penalty='none', solver='newton-cg')

# Iterating over each model and fitting on train
for nm, mod in final_models.items():[ind_vars], recid_train[y_var])

# Adding predicted probabilities back into test dataset
for nm, mod in final_models.items():
    # Predicted probs out of sample
    recid_test[nm] =  mod.predict_proba(recid_test[ind_vars])[:,1]

This is fairly tiny data, so I don't need to worry about how long this takes or running out of memory. I'd note you can also do the same model but with different hyperparameters in this approach. The depth of tree based models, for example, is one hyperparameter I limit quite a bit by default.

AUC Plots

First, my goto metric to see the utility of a particular binary prediction model is the AUC stat. This has an interpretation in terms of the concordance stat: an AUC of 0.7 means if you randomly picked a 0 case and a 1 case, the 1 case would have a higher predicted value 70% of the time. So AUC is all about how well your prediction discriminates between the two classes.
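If the concordance interpretation seems opaque, a quick simulation makes it concrete (toy data, nothing to do with the Compas example):

# AUC vs a direct concordance estimate on simulated scores
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
y = rng.binomial(1, 0.3, 10000)
score = y + rng.normal(0, 1.5, 10000)  # noisy signal of the outcome

auc = roc_auc_score(y, score)
# sample random (1, 0) pairs, count how often the 1 case scores higher
pairs = rng.choice(score[y == 1], 50000) > rng.choice(score[y == 0], 50000)
print(auc, pairs.mean())  # the two numbers should be close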

So with my binary_plots function, you can generate an ROC curve for the test data for a single column of predictions as so:

# A single column
binary_plots.auc_plot(recid_test, y_var, ['Logit'], save_plot='AUC1.png')

As I have generated predictions for multiple models, I have also generated a similar graph, but stuff the AUC stats in the matplotlib legend:

# Multiple columns to show different models
pred_prob_cols = list(final_models.keys()) #variable names
binary_plots.auc_plot(recid_test, y_var, pred_prob_cols, save_plot='AUC2.png')

It is also the case you want to do these plots for different subgroups of data. In recidivism research, we are often interested in sex and racial breakdowns. Here is the Logit model AUC broken down by Males (1) and Females (0).

# By subgroups in the data
binary_plots.auc_plot_long(recid_test, y_var, 'Logit', group='Male', save_plot='AUC3.png')

So this pulls the labels from the data, but you can pass in strings to get nicer labels. And finally, I show how to put both of these together, both by models and by subgroups in the data. Subgroups are different panels, and you can pass in a fontsize moniker to make the legends smaller for each subplot, and a size for each subplot (they are squares).

# Lets make nicer variable names for Male/Females and Racial Groups
recid_test['Sex'] = recid_test['Male'].replace({0: 'Female', 1:'Male'})
recid_test['Race'] = recid[recid_prep['Train'] == 0]['race']
recid_test['Race'] = recid_test['Race'].replace({'Hispanic': 'Other', 'Asian':'Other', 'Native American':'Other', 'African-American':'Black', 'Caucasian':'White'})

# Now can do AUC plot by gender and model type
binary_plots.auc_plot_wide_group(recid_test, y_var, pred_prob_cols, 'Sex', size=4, leg_size='x-small', save_plot='AUC4.png')

The plots have a wrap function (default wrap at 3 columns), so you can plot as many subgroups as you want. Here is an example combining the sex and race categories:

One limitation to note in these plots: ROC plots are normalized in a way that the thresholds for each subgroup may not be at the same place on the plot (e.g. a false positive rate of 0.1 for one subgroup may imply a predicted probability of 30%, whereas for another subgroup it implies a predicted probability of 40%).

ROC/AUC is definitely not a perfect stat – most of the time we are only interested in the far left hand side of the ROC curve (how well we can identify high risk cases without a ton of false positives). That is why I think drawing the curves is important – one model may have a higher AUC, but in an area of the curve not relevant for how you will use the predictions in practice. (Tree based models with limited depth and limited variables can produce flat areas in the ROC curve, for example.)

But I find the ROC curve/AUC metric the most useful default for both absolute comparisons (how well is this model doing overall), as well as relative model comparisons (is Model A better than Model B).

Most models I work with I can get an AUC of 0.7 without much work, and once I get an AUC of 0.9 I am in the clearly diminishing returns category to tinkering with my model (this is true for both criminology related models I work with, as well as healthcare related models in my new job).

This is of course data dependent, and even an AUC of 0.9 is not necessarily good enough to use in practice (you need to do a cost-benefit type analysis given how you will use the predictions to figure that out).

Calibration Charts

For those with a stat background, these calibration charts are a graphical equivalent of the Hosmer-Lemeshow test. I don't bother conducting the Chi-square test, but visually I find them informative not only to see if an individual model is calibrated, but also to see the range of the predictions (in my experience XGBoost will be more aggressive in the range of predicted probabilities, but is not always well calibrated).
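The binned logic behind these charts is simple enough to sketch in a few lines – this is just the idea, not the exact cal_data implementation:

# Minimal binned calibration table: mean predicted vs mean observed per bin
import pandas as pd

def cal_bins(df, prob_col, y_col, bins=20):
    cut = pd.qcut(df[prob_col], q=bins, duplicates='drop')
    grp = df.groupby(cut, observed=True)
    return pd.DataFrame({'pred': grp[prob_col].mean(),
                         'obs': grp[y_col].mean()})

For a calibrated model the pred and obs columns track each other across the bins.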

So we have the same three types of set ups as with the ROC plots, a single predicted model:

# For single model
binary_plots.cal_data('XGB', y_var, recid_test, bins=60, plot=True, save_plot='Cal1.png')

For multiple models, I always do these on separate subplots – they would be too busy superimposed. And because it is a single legend, I just build the data and use seaborn to do a nice small multiple. (All of these functions return the dataframe used to build the final plot in long format.) The original plot was slightly noisy with 60 bins, so I reduce it to 30 bins here, but it is still slightly noisy (each model is pretty well calibrated though). XGBoost has the widest range of probabilities; random forests' lowest bin is around 0.1 and its max is below 0.8; Logit has lower probabilities but none above 0.8.

# For multiple models
binary_plots.cal_data_wide(pred_prob_cols, y_var, recid_test, bins=30, plot=True, save_plot='Cal2.png')

Here is a single model, but by subgroups in the data. The smaller other race group is noisier, but again each subgroup appears to be approximately calibrated.

# For subgroups and one model
binary_plots.cal_data_group('XGB', y_var, 'Race', recid_test, bins=20, plot=True, save_plot='Cal3.png')

And here is a combo of subgroup data and multiple models. Again the smaller Female subgroup appears noisier, but all three models appear to be doing OK in this quick example.

# For subgroups and multiple models
binary_plots.cal_data_wide_group(pred_prob_cols, y_var, 'Sex', recid_test, bins=20, plot=True, save_plot='Cal4.png')

Sometimes people don't bin the data (Peter Austin likes to do a smoothed plot), but I find the binned data easier to follow and to identify deviations above/below the predicted probabilities. In real life you often have some fallout/dropoff if there is feedback between the model and how people respond to it (e.g. the observed is always 5% below the predicted).

Down the rabbit hole with R functions

I had a friend the other day ask me about modifying the plot that goes with R's boxCox function (in the car package). In particular they had multiple plots, and wanted to make the Y axes consistent between the different dependent variables. For a typical R base plot call you can specify ylim = c(whatever_low, whatever_high), but if you look at the function, in the end it does not let you do this yourself (it fixes ylim based on the log-likelihood range).

library(car)

# Making a second Y variable for illustration later
trees$V2 <- trees$Volume*2 + 3*rnorm(nrow(trees))

# Original function
orig_output <- with(trees, boxCox(Volume ~ log(Height) + log(Girth), data = trees))

So if we look at the orig_output object, it gives us the x and y values for the above plot, but it does not give us the dashed line locations in the plot.

Typically here I would type out boxCox without the parentheses at the prompt to get the function definition. That does not quite work here, as it is unhelpful and just gets us UseMethod("boxCox"). From here we can use methods(boxCox) to get slightly further – we can see that boxCox really has 3 different methods, which dispatch depending on the original input.

Here we are specifying the formula interface to the function call, so let's look at getAnywhere(boxCox.formula):

Well, that is not very helpful, so let's look at getAnywhere(boxCox.default) instead:

Ok, that is what we are going for. If you look into the function, at the very end you will see how it draws those dashed reference lines (anything drawn with lty = 2 in the code).

So what is happening here is that the different boxCox function calls are all daisy chained together, and it goes from formula -> lm object -> the original boxCox function. Now that we can see the function, we can make some small changes to have it return the locations of the vertical/horizontal reference lines that we want (or we could change it to accept a ylim argument directly). I name this new function

# Modifying the function to return all the info you need <- function(object, lambda = seq(-2, 2, 1/10), plotit = TRUE, interp = plotit, 
    eps = 1/50, xlab = NULL, ylab = NULL, family = "bcPower", 
    param = c("lambda", "gamma"), gamma = NULL, grid = TRUE, ...) 
{
    if (class(object)[1] == "mlm") 
        stop("This function is for univariate response only")
    param <- match.arg(param)
    ylab <- if (is.null(ylab)) {
        if (family != "bcnPower") 
            "log-likelihood"
        else {
            if (param == "gamma") {
                expression(max(logL[gamma](lambda, gamma)))
            }
            else {
                expression(max[lambda](logL(lambda, gamma)))
            }
        }
    }
    else ylab
    xlab <- if (is.null(xlab)) {
        if (param == "lambda") 
            expression(lambda)
        else expression(gamma)
    }
    else xlab
    #fam <- matchFun(family) #Needed to change this to base function
    fam <-
    if (is.null(object$y) || is.null(object$qr)) 
        stop(paste(deparse(substitute(object)), "does not have both 'qr' and 'y' components"))
    y <- object$y
    n <- length(y)
    xqr <- object$qr
    xl <- loglik <- if (family != "bcnPower") 
        as.vector(lambda)
    else {
        if (param == "lambda") 
            as.vector(lambda)
        else {
            if (!is.null(gamma)) 
                as.vector(gamma)
            else {
                p1 <- powerTransform(object, family = "bcnPower")
                gam <- p1$gamma
                se <- sd(y)
                seq(max(0.01, gam - 3 * se), gam + 3 * se, length = 100)
            }
        }
    }
    m <- length(xl)
    if (family != "bcnPower") {
        for (i in 1L:m) {
            yt <- fam(y, xl[i], j = TRUE)
            loglik[i] <- -n/2 * log(sum(qr.resid(xqr, yt)^2))
        }
    }
    else {
        lambda.1d <- function(gamma) {
            fn <- function(lam) bcnPowerllik(NULL, y, NULL, lambda = lam, 
                gamma = gamma, xqr = xqr)$llik
            f <- optimize(f = fn, interval = c(-3, 3), maximum = TRUE)
            f$objective
        }
        gamma.1d <- function(lambda) {
            fn <- function(gam) bcnPowerllik(NULL, y, NULL, lambda = lambda, 
                gamma = gam, xqr = xqr)$llik
            f <- optimize(f = fn, interval = c(0.01, max(y)), 
                maximum = TRUE)
            f$objective
        }
        for (i in 1L:m) {
            loglik[i] <- if (param == "lambda") 
                gamma.1d(loglik[i])
            else lambda.1d(loglik[i])
        }
    }
    if (interp) {
        sp <- spline(xl, loglik, n = 100)
        xl <- sp$x
        loglik <- sp$y
        m <- length(xl)
    }
    if (plotit) {
        mx <- (1L:m)[loglik == max(loglik)][1L]
        Lmax <- loglik[mx]
        lim <- Lmax - qchisq(19/20, 1)/2
        # Adding in vectors to contain the x locations and tops of the reference lines
        xF <- c()
        xT <- c()
        plot(xl, loglik, xlab = xlab, ylab = ylab, type = "n", 
            ylim = range(loglik, lim))
        if (grid) {
            grid(lty = 1, equilogs = FALSE)
        }
        lines(xl, loglik)
        plims <- par("usr")
        abline(h = lim, lty = 2)
        y0 <- plims[3L]
        scal <- (1/10 * (plims[4L] - y0))/par("pin")[2L]
        scx <- (1/10 * (plims[2L] - plims[1L]))/par("pin")[1L]
        text(xl[1L] + scx, lim + scal, " 95%")
        la <- xl[mx]
        if (mx > 1 && mx < m) {
            segments(la, y0, la, Lmax, lty = 2)
            xF <- c(xF, la)
            xT <- c(xT, Lmax)
        }
        ind <- range((1L:m)[loglik > lim])
        if (loglik[1L] < lim) {
            i <- ind[1L]
            x <- xl[i - 1] + ((lim - loglik[i - 1]) * (xl[i] - 
                xl[i - 1]))/(loglik[i] - loglik[i - 1])
            segments(x, y0, x, lim, lty = 2)
            xF <- c(xF, x)
            xT <- c(xT, lim)
        }
        if (loglik[m] < lim) {
            i <- ind[2L] + 1
            x <- xl[i - 1] + ((lim - loglik[i - 1]) * (xl[i] - 
                xl[i - 1]))/(loglik[i] - loglik[i - 1])
            segments(x, y0, x, lim, lty = 2)
            xF <- c(xF, x)
            xT <- c(xT, lim)
        }
        # See definitions of hline, vlines, vtop, ybase, just returning that info
        return(list(x = xl, y = loglik, hline = lim, vlines = xF, vtop = xT, ybase = y0))
    }
    list(x = xl, y = loglik)
}

But this won't work offhand with just calling our same prior function calls – we need to entirely replace the original boxCox.default function for our daisy chain of function references to work. Here we can use the assignInNamespace function to effectively overwrite the original.

# Need to do this to get it to work with lm objects
assignInNamespace("boxCox.default",, ns="car")
r1 <- with(trees, boxCox(Volume ~ log(Height) + log(Girth), data = trees))
r2 <- with(trees, boxCox(V2 ~ log(Height) + log(Girth), data = trees))

And now if we inspect either r1 or r2 you can see it returns the info we want.

And now we can build our own set of plots. I don't have the nice text annotations (or the default grid lines), but I leave that extra work to the reader.

par(mfrow=c(2,1), mai = c(1, 1, 0.2, 1))
plot(r1$x, r1$y, ylim=c(-160,-70), type='l', yaxp = c(-160,-70, 8),
     xlab='lambda', ylab='log-likelihood')
# You need to specify the bottom of the segment to match your ylim
abline(h = r1$hline, lty = 2)
segments(r1$vlines, -160, r1$vlines, r1$vtop, lty = 2)
plot(r2$x, r2$y, ylim=c(-160,-70), type='l', yaxp = c(-160,-70, 8),
     xlab='lambda', ylab='log-likelihood')
abline(h = r2$hline, lty = 2)
segments(r2$vlines, -160, r2$vlines, r2$vtop, lty = 2)

I have done this previously for default plots in base R that I wanted to remake in ggplot, which you could do here as well with a faceted plot instead of the par deal with multiple rows (ggplot takes care of the spacing a bit nicer). But it is too much work for this quick tip to cajole those different data frames into facets for ggplot.

Updated SPSS Chart Template (V26) and Chart Notes

So I was helping someone out the other day with SPSS chart templates, and figured it would be a good opportunity to update mine. I have a new template version for V26, V26_ChartStyle.sgt. I also have some code to illustrate the template, plus a few random SPSS charting tips along the way.

For notes about chart templates, I have written about them previously, and Ruben has a nice tutorial on his site as well. For those not familiar, SPSS chart templates specify the default look of charts, very similar to CSS for HTML. So if you want your labels to be a particular font, or the background of the chart to be light grey, or the gridlines to be dashed, those are all examples you can specify in a chart template.

It is a plain text XML file under the hood – unfortunately there is no official documentation from IBM SPSS about what is valid that I am aware of, so amending it just takes trial and error. So in case anyone from SPSS pays attention here: if you have other docs to note, let me know!

Below I will walk through my updated template and various charts to illustrate the components of it.

Walkthrough Template Example

So first to start out, if you want to specify a new template, you can either do it via the GUI (Edit -> Options -> Charts Tab), or via syntax such as SET CTEMPLATE='data\template.sgt'. Here I make some fake data to illustrate various chart types.

* Fake data (the COMPUTEs for Group, Pair, X, Y here are random draws standing in for dropped lines).
SET SEED 10.
INPUT PROGRAM.
LOOP Id = 1 TO 20.
COMPUTE Time = MOD(Id-1,10) + 1.
COMPUTE TGroup = TRUNC((Id-1)/10).
COMPUTE Group = TRUNC(RV.UNIFORM(1,7)).
COMPUTE Pair = RV.BERNOULLI(0.5).
COMPUTE X = RV.UNIFORM(0,1).
COMPUTE Y = RV.UNIFORM(0,1).
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
DATASET NAME Sim.
FORMATS Id Group Pair TGroup Time (F2.0) X Y (F2.1).
VALUE LABELS Group
  1 'A' 
  2 'B'
  3 'C'
  4 'D'
  5 'E'
  6 'F'
.
VALUE LABELS TGroup
  0 'Group One'
  1 'Group Two'
.
VALUE LABELS Pair
 0 'G1'
 1 'G2'
.
EXECUTE.
Now to start out, I show off my new color palette using a bar chart. I use a color palette derived from Van Gogh's bedroom painting as my default set, an idea I got from Sidonie Christophe (and I used this palette generator website). I also use a monospace font, Consolas, for all of the text (SPSS pulls from the system fonts, and I am a windows guy). The default font sizes are too small IMO, especially for presentations, so the X/Y axes tick labels are 14pt.

*Bar graph to show colors, title, legend.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Group MEAN(X)[name="MEAN"]
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Group=col(source(s), name("Group"), unit.category())
  DATA: X=col(source(s), name("MEAN"))
  GUIDE: axis(dim(1), label("Group"))
  GUIDE: axis(dim(2), label("Mean X"))
  GUIDE: legend(aesthetic(aesthetic.color.interior), label("Group"))
  GUIDE: text.title(label("Main Title"))
  GUIDE: text.subtitle(label("Subtitle"))
  SCALE: cat(dim(1), include("1", "2", "3", "4", "5", "6"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(Group*X), shape.interior(shape.square), color.interior(Group))
END GPL.

The legend I would actually prefer to have an outline box, but it does not behave so nicely when I use a continuous aesthetic – it pads too much. I will end the blog post with things I wish I could do with templates and/or GPL. (I wished for documentation to help me with the template settings for legends, for example, but I also wish I could specify the location of the legend in inline GPL code.)

The next chart shows a scatterplot. One thing I made sure of in the template is to not prevent you from specifying what you want in inline GPL where that is possible. So for example you could specify a default scatterplot of size 12 with a grey interior, but as far as I can tell that prevents you from changing the color later on. Also I show a trick of using the subtitle to make a Y axis label at the top of the chart, a trick I saw originally from Naomi Robbins. (She actually prefers the text running north/south on the side if you do that approach, but I prevented that in this template – I really dislike having to turn my head to read it!)

* Scatterplot, showing Y axis trick with subtitle.
* Would prefer the setStyle subtype="simple" type="scatter" works as default.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2))
  GUIDE: text.subtitle(label("  Y Label"))
  ELEMENT: point(position(X*Y), size(size."12"), color.interior(color."bebebe"))
END GPL.

The next chart shows off my default data labels. I have the labels positioned at the center point, and they also inherit the color from the data element. One trick I like is to use the polygon element to explicitly set the locations of labels (you can draw the polygons transparent, so you only see the labels in the end). So if you want to put labels at the top of bars (or above a line graph) like Excel does, you can just shift them up a smidge.

*Label trick for bar graphs.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Group COUNT()[name="COUNT"]
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset")) 
  DATA: Group=col(source(s), name("Group"), unit.category()) 
  DATA: COUNT=col(source(s), name("COUNT")) 
  TRANS: ext = eval(COUNT + 0.2)
  GUIDE: axis(dim(1)) 
  GUIDE: axis(dim(2)) 
  GUIDE: text.subtitle(label("Count")) 
  SCALE: cat(dim(1), include("1", "2", "3", "4", "5", "6")) 
  SCALE: linear(dim(2), include(0)) 
  ELEMENT: interval(position(Group*COUNT), color.interior(color."bebebe")) 
  ELEMENT: polygon(position(Group*ext), label(COUNT), color.interior(color."bebebe"),
                    transparency.interior(transparency."1"), transparency.exterior(transparency."1")) 
END GPL.

Next up is a histogram. SPSS has an annoying habit of publishing a statistics summary frame with mean/standard deviation when you make a histogram. Unfortunately the only solution I have found is to still generate the frame, but make it invisible so it doesn't show anything. It still takes up space in the chart area though, so the histogram will be shrunk to be somewhat smaller in width. Also I was unable to find a solution to not print the Frequency numbers with decimals here. (You can use setTickLabelFormat to say no decimals, but then that applies to all charts by default. SPSS typically chooses the numeric format from the data, but since the count is calculated in inline GPL, there is no way I know of to specify it in advance.)

* Histogram, no summary frame.
* Still not happy with Frequency numbers, stat summary is there but is empty.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Y
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Y=col(source(s), name("Y"))
  GUIDE: axis(dim(1), label("Y"))
  GUIDE: axis(dim(2), label("Frequency"))
  GUIDE: text.title(label("Simple Histogram of Y"))
  ELEMENT: interval(position(summary.count(bin.rect(Y, binWidth(0.1), binStart(-0.05) ))),
                    color.interior(color."bebebe"), color.exterior(color.white), 
                    transparency.interior(transparency."0.35"), transparency.exterior(transparency."0.35"))
END GPL.

The next few charts illustrate how SPSS pulls the axis label formats from the data itself. So first I create a few new variables for percentages, dollars, and long values with comma groupings. Additionally, a new part of this template is that I figured out how to set the text style for small multiple panel labels (they are styled like the tick labels for the X/Y axes). This illustrates my favorite way to mark panels.

* Small multiple columns, illustrating different variable formats.
COMPUTE XPct = X*100.
COMPUTE YDollar = Y * 5000.
COMPUTE YComma = Y * 50000.
*Can also do PCT3.1, DOLLAR5.2, COMMA4.1, etc.
FORMATS XPct (PCT3) YDollar (DOLLAR4) YComma (COMMA7).

  /GRAPHDATASET NAME="graphdataset" VARIABLES=XPct YDollar Pair
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("XPct"))
  DATA: Y=col(source(s), name("YDollar"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2), label("Y Label"))
  GUIDE: axis(dim(3), opposite())
  ELEMENT: point(position(X*Y*Pair), size(size."12"), color.interior(color."bebebe"))
END GPL.

That is how to make panels in columns, but you can also do it in rows. And again I prevent all the text I can from turning vertical, and force it to be written horizontally.

* Small multiple rows.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X YComma Pair
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("YComma"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2), label("Y Label"))
  GUIDE: axis(dim(4), opposite())
  ELEMENT: point(position(X*Y*1*Pair))
END GPL.

And finally if you do wrapped panels and set the facet label to opposite, it puts the grid label on the top of the panel.

* Small multiple wrap.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y Group
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  DATA: Group=col(source(s), name("Group"), unit.category())
  COORD: rect(dim(1,2), wrap())
  GUIDE: axis(dim(1))
  GUIDE: axis(dim(2))
  GUIDE: axis(dim(3), opposite())
  ELEMENT: point(position(X*Y*Group))
END GPL.

Next up I show how my default colors do with line charts. I typically tell people to avoid yellow for lines, but this tan color does alright I think. (And Van Gogh was probably color blind, so it works out well for that.) I have not tested printing in grey-scale; I don't think it will work out well beyond the first two colors.

* Multiple line chart (long format).
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Time Y TGroup
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Time=col(source(s), name("Time"))
  DATA: Y=col(source(s), name("Y"))
  DATA: TGroup=col(source(s), name("TGroup"), unit.category())
  SCALE: linear(dim(1), min(1))
  GUIDE: axis(dim(1), delta(1))
  GUIDE: axis(dim(2))
  GUIDE: legend(aesthetic(aesthetic.color.interior))
  GUIDE: legend(aesthetic(aesthetic.size), label("Size Y"))
  GUIDE: text.subtitle(label("   Y"))
  SCALE: cat(aesthetic(aesthetic.color.interior), include("0", "1"))
  ELEMENT: line(position(Time*Y), color.interior(TGroup))
  ELEMENT: point(position(Time*Y), shape(TGroup), color.interior(TGroup),
                    color.exterior(color.white), size(size."10"))
END GPL.

A nice approach that forgoes the legend with line charts is to label the ends of the lines, which I show below. Also the data above is in long format, and superimposing points does not quite work out perfectly (G2 should always be above G1, but sometimes the G1 point is above the G2 line). Drawing each individual line in wide format you can prevent that from happening (but it is more work to write the GPL – you need several ELEMENT statements for a single line). I also show how to use splines here, which can sometimes help disentangle the spaghetti lines. But user beware: the interpolated spline values can be misleading (one of the reasons I like superimposing the point markers as well).

* Multiple line chart with end labels (wide format).
* (The two hex colors below are stand-ins for the dropped originals.)
  /LastTime = MAX(Id).
IF (Id = LastTime) IdLabel = Id.
FORMATS IdLabel (F2.0).

  /GRAPHDATASET NAME="graphdataset" VARIABLES=Id X Y IdLabel TGroup
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Id=col(source(s), name("Id"))
  DATA: IdLabel=col(source(s), name("IdLabel")) 
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  DATA: TGroup=col(source(s), name("TGroup"), unit.category())
  SCALE: linear(dim(1), min(1))
  SCALE: linear(dim(2), max(1.08))
  GUIDE: axis(dim(1), delta(1))
  GUIDE: axis(dim(2))
  SCALE: cat(aesthetic(aesthetic.color.interior), include("0", "1"))
  ELEMENT: line(position(smooth.spline(Id*Y)), color(color."486C94"))
  ELEMENT: line(position(smooth.spline(Id*X)), color(color."C28E5C"))
  ELEMENT: point(position(Id*Y), color.interior(color."486C94"), color.exterior(color.white),
                    size(size."9"), shape(shape.circle))
  ELEMENT: point(position(Id*X), color.interior(color."C28E5C"), color.exterior(color.white),
                    size(size."8"), shape(shape.square))
  ELEMENT: point(position(IdLabel*Y), color(color."486C94"), label("Y"), 
                    transparency.interior(transparency."1"), transparency.exterior(transparency."1"))
  ELEMENT: point(position(IdLabel*X), color(color."C28E5C"), label("X"), 
                    transparency.interior(transparency."1"), transparency.exterior(transparency."1"))
END GPL.

A final example I illustrate is an error bar chart, with a few different notes about line breaks. You can use a special \n character in labels to break lines where you want. I also had a request for labeling the two ends of the chart, and here I fudge that look by adding in a bunch of white space. This takes trial-and-error to figure out the right number of spaces, and can change if the Y axis labels change length, but it is the least worst way I can think to do such a task. For error bars and bar graphs, it is also often easier to generate them going vertical, and just use COORD: transpose() to make them horizontal if you want.

* Error Bar chart.
VALUE LABELS Pair
  0 'Group\nOne'
  1 'Group\nTwo'
.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Pair MEANCI(Y, 95)[name="MEAN_Y" LOW="MEAN_Y_LOW" HIGH="MEAN_Y_HIGH"]
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  DATA: MEAN_Y=col(source(s), name("MEAN_Y"))
  DATA: LOW=col(source(s), name("MEAN_Y_LOW"))
  DATA: HIGH=col(source(s), name("MEAN_Y_HIGH"))
  COORD: transpose()
  GUIDE: axis(dim(1))
  GUIDE: axis(dim(2), label("Low Anchor                                                    High Anchor", "\nLine 2"))
  GUIDE: text.title(label("Simple Error Bar Mean of Y by Pair\n[Transposed]"))
  GUIDE: text.footnote(label("Error Bars: 95% CI"))
  SCALE: cat(dim(1), include("0", "1"), reverse() )
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(region.spread.range(Pair*(LOW+HIGH))), shape.interior(shape.ibeam))
  ELEMENT: point(position(Pair*MEAN_Y), size(size."12"))
END GPL.

Going back to some scatterplots, I will illustrate a continuous legend example using the size of points in a scatterplot. For size elements it typically makes sense to use a square root scale instead of a linear one (SPSS's default power scale is x^0.5, so a square root). If you go to edit the chart and click on the legend, you will see what I mean about the excessive padding of white space at the bottom. (I also wish you could control the breaks that are shown in inline GPL – breaks for non-linear scales are no doubt tricky though.)

* Continuous legend example.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2))
  GUIDE: text.subtitle(label("  Y Label"))
  GUIDE: legend(aesthetic(aesthetic.size), label("SizeLab"))
  SCALE: linear(dim(2), max(1.05))
  SCALE: pow(aesthetic(aesthetic.size), aestheticMinimum(size."6px"), 
               aestheticMaximum(size."30px"), min(0), max(1))
  ELEMENT: point(position(X*Y), size(Y), color.interior(color."bebebe"))
END GPL.

And you can do multiple legends as well. SPSS does a good job blending them together in this example, although to label the blended group you need to figure out which aesthetic wins the hierarchy first, I guess.

* Multiple legend example.
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y Pair
  /GRAPHSPEC SOURCE=INLINE.
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2))
  GUIDE: text.subtitle(label("  Y Label"))
  GUIDE: legend(aesthetic(aesthetic.color.interior), label("Color/Shape Lab"))
  GUIDE: legend(aesthetic(aesthetic.size), label("Size & Trans. Lab"))
  SCALE: linear(dim(2), max(1.05))
  SCALE: linear(aesthetic(aesthetic.size), aestheticMinimum(size."6px"), 
               aestheticMaximum(size."30px"), min(0), max(1))
  SCALE: linear(aesthetic(aesthetic.transparency.interior), aestheticMinimum(transparency."0"), 
               aestheticMaximum(transparency."0.6"), min(0), max(1))
  ELEMENT: point(position(X*Y), transparency.interior(Y), 
                            shape(Pair), size(Y), color.interior(Pair))
END GPL.

But that is not too shabby for just out of the box and no post-hoc editing.

Wishes for GPL and Templates

So I have put in my comments on things I wish I could do via chart templates. For a recap: documentation on what is possible, turning off the summary statistics frame for histograms entirely (not just making it invisible), padding for continuous legends, and making the setStyle defaults for the various charts styleOnly are a few examples. Then there are some parts I wish you could do in inline GPL, such as setting the location of the legend. I would also not mind having access to any of these template settings in inline GPL – setting the number format in a GUIDE statement, for example, I think would make sense. (I have no idea how hard/easy these things are under the hood.) But no documentation for templates is a real hassle when trying to edit your own.

Do y’all have any other suggestions for a chart template? Other default charts I should check out to see how my template does? Other suggested color scales? Let me know your thoughts!

Crime analysis dashboards in Tableau

So previously I have rewritten a few of my Crime Analysis tutorials (in Excel) to show how to use Tableau.

It takes too much work to do a nice tutorial like that with no clear end user, so I will just post some further examples I have been constructing to self-teach myself Tableau. To see my current workbook, you can download the files here.

The real benefit of Tableau over static charts in Excel (or whatever statistical program) is that you can do interactive filtering and brushing/linking. So here is an example GIF showing how you can superimpose the weekly & seasonal charts I showed earlier, along with additional charts. Instead of a dropdown to filter by different crime types, I show how you can use a treemap as a filter. You can select either one element or multiple elements – first I show selecting different types of larceny (orange), then selecting all of the Part 2 nuisance crimes.

The treemap idea is courtesy of Jerry Ratcliffe and Grant Drawve, and one of my co-workers using it like this in a Tableau dashboard gave me the idea. Here the different colors represent Part 2 disorder crimes (blue), property crimes (orange), and violent crimes (red). While you cannot see labels for each cell, it does have tooltips, so with the interactivity you can see what individual cells contain in the end.

You can mash-up additional tables, graphs, and maps as well. Here is another example using Compstat like tables for crime totals by year, a table of counts of crime per street (I would prefer individual addresses, but the Burlington CAD data I used to illustrate does not have them) filtered to the top 30, and a point map. You can select any one graphic and it subsets the others.

While Tableau has maps, I am not really impressed by them offhand. Point maps are no big deal, but with many points they become inscrutable. You can do a kernel density map, but it is very difficult to make it look reasonable depending on the filtering/zoom. If Tableau implements something like Leaflet's marker clustering for point maps, I think that would be a bit more friendly.

Dashboards no doubt are a trade-off with space – you can only reasonably put so much in a limited area. But brushing/linking between graphics is a really big difference between Tableau and other traditional static graphics. It may not always be necessary, but it can sometimes be useful.

Next up I have a few ideas to make a predictive model monitoring dashboard in Tableau.

Weekly Error Bar Chart in Tableau

I have posted my Tableau tutorial #2 for making a weekly error bar chart in Tableau. (Tutorial #1 was for a seasonal chart.) This is replicating prior examples I provided in Excel for IACA workshops and my undergrad crime analysis course.


For a sneak peek of the end result, see here:

Making error bars in Tableau is quite a chore. One approach people use in Excel – making a cumulative area chart and then making the lower area invisible – does not work in Tableau. Since you can interact with everything, making something that is there but invisible is not an option. You could do that approach and turn the area white, but then the gridlines or anything below that object are not visible.

So the best workaround I found was to use discrete time and the reference band option in the background. This is a good example for non-normal error bars – here for low count Poisson data – but another use case I will have to show sometime is proportion confidence intervals in Tableau. (This is one reason I am doing this: I need to do something similar at work with proportions to monitor my machine learning models. No better way to teach myself than to do it myself.)
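(For reference, the Poisson band itself is a one-liner outside of Tableau – the weekly mean here is a made-up example:)

# 95% interval around a low count Poisson weekly mean
from scipy.stats import poisson
low, high = poisson.interval(0.95, 3.2)  # 3.2 = example mean weekly count
print(low, high)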

Next up I will have to show an example that illustrates the unique ability of Tableau (at least relative to Excel) – making a dashboard with brushing/linking. I am tinkering with showing that off using this same example data with a geographic map as well. My dashboards so far all tend to look not very nice though, so I will need to practice some more before I can show those off.

Filled contour plot in python

I've been making a chart that looks similar to this for a few different projects at work, so I figured a quick blog post showing my notes on it would be useful.

So people often talk about setting a decision threshold to turn a predicted probability into a binary yes/no decision, e.g. do I apply some process to this observation if the probability is 20%, 30%, 60%, etc. If you can identify the costs and benefits of making particular decisions, you can set a simple threshold to make that decision. For example, say you are sending adverts in the mail for a product. If the person buys the product, your company makes $50, and the advert only costs $1 to send. In this framework, if you have a predictive model for the probability the advert will be successful, then your expected return looks like this:

$50*probability - $1

So in this case you need the predicted probability to be above 2% to have an expected positive return on the investment of sending the advert. If you have a probability of 10% for 2000 customers, you would expect to make 2000*(50*0.1 - 1) = $8,000. The probabilities you get from your predictive model can be thought of as long run averages. Any single advert may be a bust, but if your model is right and you send out a bunch, you should make this much money in the end. (If you have a vector of varying probabilities, in R code the estimated revenue will look like prob <- runif(2000,0,0.1); pover <- prob > 0.02; sum( (50*prob - 1)*pover ).)
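(For python folks, the same expected revenue check, mirroring that R snippet:)

# Expected revenue over 2000 customers with varying success probabilities
import numpy as np
rng = np.random.default_rng(10)
prob = rng.uniform(0, 0.1, 2000)
pover = prob > 0.02  # only send adverts with a positive expected return
print(((50*prob - 1)*pover).sum())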

But many of the decisions I work with do not have a single number in the benefits column. I am working with medical insurance claims data at HMS, often building models to audit those claims in some way. In this framework, it is more important to audit a high dollar claim than a low dollar claim, even if the high dollar claim has a lower probability. So I have been making the filled contour plot I show in the next section to illustrate this.

python contour plot

The code snippet is small enough to just copy-paste entirely. First, I generate data over a regular grid to illustrate different claim amounts and probabilities. Then I use np.meshgrid to get the data in the right shape for the contour plot. The revenue estimates are then simply the probability times the claim amount, minus some fixed cost (often the labor to audit the claim). After that it is just idiosyncratic matplotlib code to make a nice filled contour.

# Example of making a revenue contour plot
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import numpy as np

n = 500 #how small grid cells are
prob = np.linspace(0,0.5,n)
dollar = np.linspace(0,10000,n)
#np.logspace(0,np.log10(10000),n) #if you want to do logged

# Generate grid
X, Y = np.meshgrid(prob, dollar)

# Example generating revenue
fixed = 200
Rev = (Y*X) - fixed

fig, ax = plt.subplots()
CS = ax.contourf(X, Y, Rev, cmap='RdPu')
clb = fig.colorbar(CS)
clb.ax.set_title('Revenue') #A bit too wide
#clb.ax.set_title('$') #html does not like the dollar sign
ax.set_xlabel('Probability')
ax.set_ylabel('Claim Amount')
plt.title('Revenue Contours')
plt.annotate('Revenue subtracts $200 of fixed labor costs',
             (0,0), (0, -50),
             xycoords='axes fraction',
             textcoords='offset points', va='top')

The color bar works nicely here out of the box. Next up in my personal learning will be how to manipulate color bars a bit more. Here I may want to either use a mask to not show negative expected returns, or a diverging color scheme (e.g. blue for negative returns).
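For example, a masked array is one way to leave the negative return region blank – a sketch re-using the same grid set up as above:

# Blank out cells with negative expected returns via a masked array
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(0, 0.5, 500), np.linspace(0, 10000, 500))
Rev = np.ma.masked_less(Y*X - 200, 0)  # masked cells are simply not painted

fig, ax = plt.subplots()
CS = ax.contourf(X, Y, Rev, cmap='RdPu')
fig.colorbar(CS)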

Buffers and hospital deserts with geopandas

Just a quick blog post today. As a bit of a side project at work I have been looking into medical service provider deserts. Most people simply use a geographic cutoff of say 1 mile (see Wissah et al., 2020 for an example with pharmacy deserts). Also for CJ folks, John Hipp has done some related work on parolees being nearby service providers (Hipp et al., 2009; 2011), measuring nearby as 2 miles.

So I wrote some code to calculate nice sequential buffer areas and dissolve them in geopandas. Files and code to showcase are here on GitHub. First, as an example dataset, I geocode (using the census geocoding API) CMS certified home healthcare facilities, so these are hospice facilities. To see a map of those facilities across the US (you can click on the button to get info), go here: CMS HOME FACILITY MAP. Below is a screenshot:

Next I generate sequential buffers in kilometers of 2, 4, 8, 16, and then the leftover area (just for Texas). So you can zoom in, and darker areas are at a higher risk of not having a hospice facility nearby: HOSPICE DESERT MAP
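The gist of the buffer logic is short enough to sketch here. This is a simplified version (assuming a GeoDataFrame fac of facility points in a projected CRS in meters); the GitHub repo above has the full code:

# Sequential dissolved buffer rings around facility points (sketch)
import geopandas as gpd

def seq_buffers(fac, dists_km):
    rings, prior = [], None
    for d in dists_km:
        buf = fac.buffer(d*1000).unary_union  # dissolve overlapping buffers
        rings.append(buf if prior is None else buf.difference(prior))
        prior = buf
    return gpd.GeoDataFrame({'km': dists_km}, geometry=rings, crs=fac.crs)

# ring_map = seq_buffers(fac, [2, 4, 8, 16])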

Plotting some of these in Folium was giving me fits, so I will need to familiarize myself with it more in the future. The buffers for the full US were giving me trouble as well (even these just for Texas result in fairly large files – I am surprised Github doesn't yell at me for them being too big).

Going forward, what I want to do, instead of relying on a fixed function of distance, is to fit a model to identify an individual's probability of going to the doctor based on distance. So instead of just saying 1+ mile means you are at high risk, fit a function that defines that distance based on behavioral data (maybe using insurance claims). Also I think the distances matter quite a bit for urban/rural and car/no-car. Rural folks traveling a mile is not a big deal, since you need a car to really do anything in rural areas. But for folks in the city relying on public transportation, going a mile or two is a bigger deal.
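A minimal sketch of what such a model could look like, with simulated travel distances and purely hypothetical parameters:

# Toy distance-decay model: P(visit) as a logit function of miles
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
miles = rng.uniform(0, 20, 5000)
p_true = 1/(1 + np.exp(-(1.5 - 0.3*miles)))  # hypothetical decay curve
visit = rng.binomial(1, p_true)

res = sm.Logit(visit, sm.add_constant(miles)).fit(disp=0)
print(res.params)  # should recover roughly (1.5, -0.3)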

The model then would be similar to the work I did with Gio on gunshot death risk (Circo & Wheeler, 2020), although I imagine the relationship would vary spatially (so maybe geographically weighted regression would work out well).


A Tableau walkthrough: Seasonal chart

So my workplace uses Tableau quite a bit, and I know it is becoming pretty popular for crime analysis units as well. So I was interested in trying to pick some of it up. It can be quite daunting though – I've tried to sit through a few general tutorials, but they make my head spin.

Students of mine when I teach ArcGIS have said it has so many buttons it can be overwhelming, and Tableau is much the same way. I can see the appeal of it though, in particular for analysts who exclusively use Excel. With the drag/drop you can somewhat intuitively build more detailed charts that are difficult to put together in Excel. And of course out of the box it produces interactive charts you can share, which is really the kicker that differentiates Tableau from other tools.

So instead of sitting through more tutorials I figured I would just jump in and make a few interactive graphics. And along the way I will do tutorials, same as for my other crime analysis labs, for others to follow along.

And I've finished/posted my first tutorial, making a seasonal chart. It is too big to fit into a blog post (over 30 screenshots!), but it shows how to make a monthly seasonal chart, which is a nice interactive to have for Compstat like meetings.

Here is the final interactive version, and here is a screenshot of the end result:

And you can find the full walkthrough with screenshots here:


Some Things Crime Analysts Should Consider When Using Tableau

So first, I built this using the free version of Tableau. I don’t think the free version will cut it though for most crime analysts.

One of the big things I see Tableau as being convenient for is a visualization layer on top of a database. It can connect to the live database, and so automatically update. You cannot do this though with the free version. (And likely you will need some SQL chops to build views for data in formats you can't figure out how to coerce with Tableau functions.)

So if you go through the above tutorial and say that is a lot of work – well, it is, but you can set it up once on a live data stream and it just works going forward.

The licensing isn't crazy though, and if you are doing this for data that can be shared with the public, I think it can make sense for crime analysts. For detailed report info that cannot be shared with the public it is a bit more tricky though (and I definitely cannot help with the details of running your own on-prem server).

There are other totally free interactive dashboard options as well, such as Shiny in R and the plotly libraries (in R and python); python has a few other interactive libraries as well. The hardest part really is the server portion for any of them (making it so others can see the interactive graphic). Tableau is nice and reactive in my experience though, even when hooked up to a live data stream (but not crazy big data).

I hope to expand to my example Poisson z-score charts with error bands, and then maybe see if I can build a dashboard with some good cross-linking between panes with geo data.

For this example I am almost 100% happy with the end result. One thing I would like is for the hover behavior to select the entire line (but with the tooltips still being individual months). I would also like the point at the very end to be larger and not show the label. But these are very minor things in the end.