Creating high crime sub-tours

I was nerdsniped a bit by this paper, Targeting Knife-Enabled Homicides For Preventive Policing: A Stratified Resource Allocation Model by Vincent Hariman and Larry Sherman (HS from here on).

In it, HS attempt to define a touring schedule based on knife crime risk at the lower super output area (LSOA) level in London. So here are the identified high risk areas:

And here is HS’s suggested hot spot tour schedule.

This is ad-hoc, but an admirable attempt to figure out a reasonable schedule. As you can see in their tables, the ‘high’ knife crime risk areas still only have a handful of homicides, so if reducing homicides is the objective, this program is a bit dead in the water (I’ve written about the lack of predictive ability of the model here).

I don’t think defining tours to visit everywhere makes sense, but a somewhat smaller question in scope does: how to figure out geographically informed tours for hot spot areas. So instead of a single grid cell target a la PredPol, pick out multiple areas to visit for hot spots. (I don’t imagine the 41 LSOA areas are geographically contiguous either, e.g. it would make more sense to pick a tour for connected areas than for areas very far apart.)

Officers don’t really tend to like single tiny areas either, and I think it makes more sense to widen the scope a bit. So here is my attempt to figure out those reasonable tours.

Defining the Problem

The way I think about the problem is like this: look at the hypothetical diagram below. We have two choices for the hot spot location we are targeting, where the crime counts for locations are noted in the text labels.

In the ‘select the top hot spot’ approach (e.g. PredPol), you would select the singlet grid cell in the top left, as it has the highest intensity. We have another choice though, the more spread out hot spot in the lower right. Even though it has a lower density, it ends up capturing more crime overall.

I subsequently formulated an integer linear program to try to tackle the problem of finding good sub-tours through the graph that cumulatively capture more crime. So with the above graph, if I select two subtours, I get these results (where nodes are identified by their (x,y) position):

  • ['Begin', (1, 4), 'End']
  • ['Begin', (4, 0), (4, 1), (3, 1), (3, 0), (2, 0), 'End']

So it can select singlet areas if they are islands (the (1,4) area in the top left), but will otherwise grow to wind through multiple areas. Also note that the way I have programmed this network, it doesn’t skip the zero area at (4,1) (the tour needs to go through at least one of those zero areas in the bottom right unless it doubles back on itself).

I will explain the meaning of the begin and end nodes below in my description of the linear program. It ends up being sort of a mash-up of traveling salesman type vehicle routing and min cost max flow type problems.

The Linear Program

The way I think about this problem formulation is like this: we have a directed graph, in which you can say, OK I start from location A, then go to B, then go to C. In my set of decision variables, I have choices that look like this, where the first subscript denotes the from node, and the second subscript denotes the to node.

D_ab := node a -> node b
D_bc := node b -> node c

etc. In our subsequent linear program, the destination node is the node for which we calculate our cumulative crime density statistics. So if node B had 10 crimes and 0.1 square kilometers of area, we would have a density of 100 crimes per square kilometer.

Now to make this formulation work, we need to add a set of special nodes into our usual location network. These nodes I call ‘Begin’ and ‘End’ nodes (you may also call them source/sink nodes). The decision variables out of the Begin node all look like this:

D_{begin,a}
D_{begin,b}
D_{begin,c}

So you do that for every node in your network. Then you have decision variables into the End node as well, e.g.

D_{a,end}
D_{b,end}
D_{c,end}

In this formulation, since we are only concerned about the crime stats for the to node, not the from node, the edges out of the Begin node just inherit the crime density stats of their destination node. For the edges into the End node though, you just set the objective value stats to zero (they are only relevant to define the constraints).
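To make that concrete, here is a minimal sketch of building the augmented network in networkx. This is just an illustration, not the posted code – area_edges, crime, and area are hypothetical stand-ins for the real contiguity edges and the per-area crime and area lookups.

import networkx as nx

# Hypothetical sketch: `area_edges`, `crime`, and `area` are stand-ins for the
# real contiguity edges and per-area lookups, not the posted data.
G = nx.DiGraph(area_edges)
for n in list(G.nodes()):
    G.nodes[n]['density'] = crime[n] / area[n]  # density of the to-node drives the objective
    G.add_edge('Begin', n)                      # source edge into every area
    G.add_edge(n, 'End')                        # sink edge out of every area
G.nodes['End']['density'] = 0                   # End edges contribute nothing to the objective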

Now here is my linear program formulation:

Maximize 
  Sum [ D_ij ( CrimeDensity_j - DensityPenalty_j ) ]

Subject To:

 1. Sum( D_in for each neighbor of n ) <= 1, 
      for each original node n
 2. Sum( D_in for each neighbor of n ) =  Sum( D_ni for each neighbor of n ), 
      for each original node n
 3. Sum( D_bi for each begin node ) = k routes
 4. Sum( D_ie for each end node ) = k routes
 5. Sum( D_ij + D_ji ) <= 1, for each unique i,j pair
 6. D_ij is an element of {0,1}

Constraint 1 says a node can have at most one selected incoming edge. If a node has an incoming edge set to one, it cannot have any other incoming edge set to one (so a location can only be chosen once).

Constraint 2 says that if an incoming edge to a node is selected, one of its outgoing edges needs to be selected as well.

Constraints 3 & 4 determine the number of k tours/routes to choose in the end. Since the begin/end nodes are special we have k routes going out of the begin nodes, and k routes going into the end nodes.

With just these constraints, I found you can still get micro-cycles, e.g. something like X -> Z -> X. Constraint 5 (defined over the unique undirected edge pairs) prevents this from happening.

Constraint 6 is just setting the decision variables to binary 0/1. So it is a mixed integer linear program.

The final thing to note is the objective function. I have CrimeDensity_j - DensityPenalty_j, so what exactly is DensityPenalty? This is a value that penalizes visiting areas below a chosen density threshold. Basically, the density penalty sets an approximate threshold for the minimum density a tour should contain.

I suggest a default of a predictive accuracy index (PAI) of 10. Where do I get 10 you ask? Weisburd’s law of crime concentration suggests 5% of the areas should contain 50% of the crime, which is a PAI of 0.5/0.05 = 10. In my example with DC data I then just calculate the actual density of crime per unit area that corresponds to a PAI of 10.
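Backing out that density is just a rearrangement of the PAI formula. A small sketch, where node_data is a hypothetical data frame with crime and area columns for each area:

# PAI_j = (crime share)/(area share) = density_j * total_area / total_crime,
# so a PAI of 10 corresponds to a density of 10 * total_crime / total_area.
# `node_data` is a hypothetical data frame with crime and area columns.
total_crime = node_data['crime'].sum()
total_area = node_data['area'].sum()
density_penalty = 10 * total_crime / total_area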

You can adjust this though: if you prefer smaller tours of higher crime density, up the value; if you prefer longer tours, decrease it.

This is the best way I could figure out to trade off the idea of spreading out the targeted hot spot vs selecting only the best areas. If you spread out you will ultimately have a lower density. The penalty turns this into a soft objective that tries to keep the selected tours at a particular density threshold (and will scoop up better tours if they are available). For a while I tried to figure out if I could maximize the PAI metric directly, but with larger areas the PAI will always go down, so you need to define the objective some other way.
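To make the formulation concrete, here is a minimal sketch of how it could be set up with PuLP. This is not the posted code (which may use a different solver or library), just an illustration that assumes the augmented networkx graph G from the earlier sketch, with a 'density' attribute on each node and the density_penalty value from above.

import pulp

def hot_spot_tours(G, k, density_penalty):
    prob = pulp.LpProblem('HotSpotTours', pulp.LpMaximize)
    # one binary decision variable per directed edge (including Begin/End edges)
    D = {(i, j): pulp.LpVariable(f'D_{idx}', cat='Binary')
         for idx, (i, j) in enumerate(G.edges())}
    # objective: density of the destination node minus the penalty
    prob += pulp.lpSum(v * (G.nodes[j]['density'] - density_penalty)
                       for (i, j), v in D.items() if j != 'End')
    for n in G.nodes():
        if n in ('Begin', 'End'):
            continue
        into = pulp.lpSum(D[i, n] for i in G.predecessors(n))
        out = pulp.lpSum(D[n, j] for j in G.successors(n))
        prob += into <= 1      # constraint 1: enter an area at most once
        prob += into == out    # constraint 2: flow in equals flow out
    prob += pulp.lpSum(D['Begin', j] for j in G.successors('Begin')) == k  # constraint 3
    prob += pulp.lpSum(D[i, 'End'] for i in G.predecessors('End')) == k    # constraint 4
    for i, j in list(D):       # constraint 5: no two-node micro-cycles
        if (j, i) in D and str(i) < str(j):
            prob += D[i, j] + D[j, i] <= 1
    prob.solve()               # constraint 6 is the Binary category above
    return [e for e, v in D.items() if v.value() == 1]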

In this formulation I only consider linked nodes (unlike the usual traveling salesman problem, which works over a completely connected distance graph). That makes it much more manageable. If N is the number of nodes/areas and E is the number of directed edges between those areas, we will then have:

  • 2*N + E decision variables
  • 2 + 2*N + E/2 constraints

Generally if you are using directly connected areas in geographic networks (i.e. contiguity connections), you will have fewer than 8 (typically more like an average of 6) neighbors per area. So in the case of the ~4k London lower super output areas, if I chose tours over those I would guess it would end up being fewer than 2*4,000 + 8*4,000 = 40,000 decision variables, and fewer constraints than that.

Since that is puny (and I would suggest doing this at a smaller geographic resolution) I tested it out on a harder network. I used the data from my dissertation, a network of 21,506 street units (both street segments and intersections) in Washington, D.C. The contiguity I use for these micro units is based on the Voronoi tessellation, so each unit tends to have more neighbors than you would get with strictly road based network connectivity. Still, in the end it is a shade fewer than 200k decision variables and 110k constraints. So I think it is a better in-the-wild test of whether the problem can feasibly be solved.

Example with DC Data

Here I have posted the python code and data used for this analysis. I end up having a nice function where you just submit your network with the appropriate attributes and out pop the different tours.
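Using the sketch function from the linear program section, the workflow would look something like this (an illustration only, not the actual function in the posted code):

# illustration of the workflow using the hot_spot_tours sketch from above,
# not the actual function in the posted code
selected_edges = hot_spot_tours(G, k=4, density_penalty=density_penalty)
print(selected_edges)  # chain the selected edges together to get the k sub-tours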

So I end up doing examples of 4 and 8 subtours based on 2011 violent UCR crime data (agg assaults, robberies, and homicides; no rapes in the public data). For the penalty I use that PAI = 10 threshold, so it should limit tours to approximately that value. It only takes 2 minutes for the model to converge for the 4 tours and less than 2.5 minutes for the 8 tours on my desktop. So it should not be a big problem to up the decision variables to more sub-areas and still be solvable in real life applications.

The area estimates are in square meters, hence the high numbers. But on the right you can see that each sub-tour has a PAI above 10.

Here is an interactive map for you to zoom into the 4 subtour example. Below is a screenshot of one of the subtours. You can see that since I have defined my connected areas in terms of Voronoi tessellations, they don’t exactly follow the street network.

For the 8 tour example, it ends up returning several zero tours, so it is not possible in this data to generate 8 sub-tours that meet that PAI >= 10 threshold.

You can see that the tours end up having higher PAI values, but lower overall crime counts.

You may think, why does it not at least pick singlet areas with at least one crime? It ends up being that I weight areas here by their area (this formulation would be better with grid cells of equal area). So my objective function is technically Sum [ D_ij * w_j * ( CrimeDensity_j - DensityPenalty_j ) ], where w_j is the percent of the total area (i.e. the denominator in the PAI calculation). So for those zero tours it ends up picking the tiniest areas, as they result in the smallest penalty to the objective function (w_j is tiny). I think this is OK in the end though – I’d rather know that some of the tours are worthless.

You can also see I get one subtour that is just under the PAI 10 threshold. Again that is possible here, but it should only be slightly below in the worst case scenario. The way the objective function works, it is pretty tricky to pick out subtours below that PAI value that still make a positive contribution to the overall objective function.

Future Directions

The main thing I wish I could do with the current algorithm (but can’t the way the linear program is set up) is to have minimum and maximum tour area/length constraints. I think I can maybe do this by adapting this code (I’m not sure how to do the penalties/objectives though). So if others have ideas let me know!

I admit that this may be overkill, and maybe just doing more typical crime clustering algorithms may be sufficient. E.g. doing DBSCAN hot spots like I did here.

But this is my best shake at the problem for now!

Using Steiner trees to select a subgraph of interest

This is just a quick blog post. A crime analyst friend the other day posed a network problem to me. They had a social network in which they had particular individuals of interest, and wanted to show just a subset of that graph that connected those key individuals. The motivation was for plotting – if you show the entire hairball it can become really difficult to uncover any relationships.

Here is an example gang network from this paper. I randomly chose 10 nodes to highlight (larger red circles), and you can see it is quite hairy. You often want to label the nodes for these types of graphs, but that becomes impossible with so many intertwined nodes.

One solution to select out a subgraph of the connected bits is to use a Steiner tree. Here is that graph after running the approximate Steiner tree algorithm in networkx (in python).

Much simpler! And much more space to put additional labels.

I’ve posted the code and data to replicate here. Initially I debated on solving this via setting up the problem as a min-cost-flow, where one of the highlighted nodes had the supply, and the other highlighted nodes had the demand. But this approximate algorithm in my few tests looks really good in selecting tiny subsets, so not much need.
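For reference, the core of the approach is only a few lines in networkx. A minimal sketch (the variable names here are my own stand-ins, see the posted code for the full replication):

import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# G is the full gang network, key_nodes the 10 highlighted individuals
# (both names are stand-ins for the posted data)
sub_G = steiner_tree(G, key_nodes)  # approximate Steiner tree over the terminal nodes
print(sub_G.number_of_nodes(), sub_G.number_of_edges())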

A few things to note about this. It is likely for many dense networks there will be many alternative subsets that are the same size, but different nodes (e.g. you can swap out a node and have the same looking network). A better approach to see connections between interesting nodes may be a betweenness centrality metric, where you only consider the flows between the highlighted nodes.

A partial solution to that problem is to add nodes/edges back in after the Steiner tree subset. Here is an example where I add back in all first degree nodes to the red nodes of interest:

So it is still a tiny enough network to plot. This just provides a way to identify higher order nodes of interest that aren’t directly connected to those red nodes.
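A sketch of that add-back step, continuing the networkx example above (again my own variable names, not the posted code):

# keep the Steiner tree nodes, plus all first degree neighbors of the red nodes
keep = set(sub_G.nodes())
for n in key_nodes:
    keep.update(G.neighbors(n))
plot_G = G.subgraph(keep)  # the smaller subgraph to plot with labels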

Histogram notes in python with pandas and matplotlib

Here are some notes (for myself!) about how to format histograms in python using pandas and matplotlib. The defaults are no doubt ugly, but here are some pointers to simple changes to formatting to make them more presentation ready.

First, here are the libraries I am going to be using.

import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
from matplotlib.ticker import StrMethodFormatter
from matplotlib.ticker import FuncFormatter

Then I create some fake log-normal data and three groups of unequal size.

#generate three groups of differing size
n = 50000
group = pd.Series(np.random.choice(3, n, p=[0.6, 0.35, 0.05]))
#generate log-normal data
vals = pd.Series(np.random.lognormal(mean=(group+2)*2,sigma=1))
dat = pd.concat([group,vals], axis=1)
dat.columns = ['group','vals']

And note I change my default plot style as well. So if you are following along your plots may look slightly different than mine.
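The style change is just one line; I am not sure which style the original plots use, so take this as an example rather than the exact setting:

# a guess at a one-line style change; swap in whichever matplotlib style you prefer
plt.style.use('ggplot')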

One trick I like is using groupby and describe to do a simple textual summary of groups. But I also like transposing that summary to make it a bit nicer to print out in long format. (I use spyder more frequently than notebooks, so it often cuts off the output.) I also show setting the pandas options to a print format with no decimals.

#Using describe per group
pd.set_option('display.float_format', '{:,.0f}'.format)
print( dat.groupby('group')['vals'].describe().T )

Now onto histograms. Pandas has many convenience functions for plotting, and I typically do my histograms by simply upping the default number of bins.

dat['vals'].hist(bins=100, alpha=0.8)

Well that is not helpful! So typically when I see this I do a log transform. (Although note if you are working with low count data that can have zeroes, a square root transformation may make more sense. If you have only a handful of zeroes you may just want to do something like np.log(dat['x'].clip(1)) just to make a plot on the log scale, or clip to some other value below 1 so those zeroes stand out as negative values on the log scale.)

#Histogram On the log scale
dat['log_vals'] = np.log(dat['vals'])
dat['log_vals'].hist(bins=100, alpha=0.8)

Much better! It may not be obvious, but using pandas convenience plotting functions is very similar to just calling things like ax.plot or plt.scatter etc. So you can assign the plot to an axes object, and then do subsequent manipulations. (Don’t ask me when you should be putzing with axes objects vs plt objects, I’m just muddling my way through.)

So here is an example of adding in an X label and title.

#Can add in all the usual goodies
ax = dat['log_vals'].hist(bins=100, alpha=0.8)
plt.title('Histogram on Log Scale')
ax.set_xlabel('Logged Values')

Although it is hard to tell in this plot, the data are actually a mixture of three different log-normal distributions. One way to compare the distributions of different groups is by using groupby before the histogram call.

#Using groupby to superimpose histograms
dat.groupby('group')['log_vals'].hist(bins=100)

But you see two problems here. First, since the groups are not near the same size, some are shrunk in the plot. Second, I don’t know which group is which. To normalize the areas for each subgroup, specifying the density option is one solution. Also plotting at a lower alpha level (more transparency) lets you see the overlaps a bit more clearly.

dat.groupby('group')['log_vals'].hist(bins=100, alpha=0.65, density=True)

Unfortunately I keep getting an error when I specify legend=True within the hist() function, and specifying plt.legend after the call just results in an empty legend. So another option is to do a small multiple plot, by specifying a by option within the hist function (instead of groupby).

#Small multiple plot
dat['log_vals'].hist(bins=100, by=dat['group'], 
                     alpha=0.8, figsize=(8,8))

This takes up more room, so you can pass in the figsize parameter directly to expand the area of the plot. Be careful when interpreting these, as the axes are not shared by default, so both the Y and X axes are different, making it harder to compare offhand.

Going back to the superimposed histograms, to get the legend to work correctly the best solution I have come up with is just creating the different charts in a loop over subsets of the data. (I think that is easier than building the legend yourself.)

#Getting the legend to work!
for g in pd.unique(dat['group']):
    dat.loc[dat['group']==g,'log_vals'].hist(bins=100,alpha=0.65,
                                             label=g,density=True)
plt.legend(loc='upper left')

Besides the density=True to get the areas to be the same size, another trick that can sometimes be helpful is to weight the statistics by the inverse of the group size. The Y axis is not really meaningful here, but this sometimes is useful for other chart stats as well.

#another trick, inverse weighting
dat['inv_weights'] = 1/dat.groupby('group')['vals'].transform('count')
for g in pd.unique(dat['group']):
    sub_dat = dat[dat['group']==g]
    sub_dat['log_vals'].hist(bins=100,alpha=0.65,
                             label=g,weights=sub_dat['inv_weights'])
plt.legend(loc='upper left')    

So far, I have plotted the logged values. But I often want the labels to show the original values, not the logged ones. There are two different ways to deal with that. One is to plot the original values, but then use a log scale axis. When you do it this way, you want to specify your own bins for the histogram. Here I also show how you can use StrMethodFormatter to return a money value. Also rotate the labels so they do not collide.

#Specifying your own bins on original scale
#And using log formatting
log_bins = np.exp(np.arange(0,12.1,0.1))
ax = dat['vals'].hist(bins=log_bins, alpha=0.8)
plt.xscale('log', basex=10)
ax.xaxis.set_major_formatter(StrMethodFormatter('${x:,.0f}'))
plt.xticks(rotation=45)

If you omit the formatter option, you can see the returned values are 10^2, 10^3 etc. Besides log base 10, folks should often give log base 2 or log base 5 a shot for their data.

Another way though is to use our original logged values, and change the format in the chart. Here we can do that using FuncFormatter.

#Using the logged scaled, then a formatter
#https://napsterinblue.github.io/notes/python/viz/tick_string_formatting/
def exp_fmt(x,pos):
    return '${:,.0f}'.format(np.exp(x))
fmtr = FuncFormatter(exp_fmt)

ax = dat['log_vals'].hist(bins=100, alpha=0.8)
plt.xticks(np.log([5**i for i in range(7)]))
ax.xaxis.set_major_formatter(fmtr)
plt.xticks(rotation=45)

On the slate is to do some other helpers for scatterplots and boxplots. The pandas defaults are no doubt good for EDA, but they need some TLC to make them more presentation ready.

An example of soft constraints in linear programming

Most of the prior examples of linear programming on my site use hard constraints. These are examples where I say to the model, “only give me results that strictly meet these criteria”, like “only select 40 cases to audit”, or “keep the finding rate over 50%”, etc.

There are alternative ways though to tell the model, “I want to select a finding rate over 50%, but still potentially consider those with lower finding rates”. One way to do that is via soft constraints, modifying the objective function directly to penalize (or favor) particular outcomes. For example, say you knew you could translate a 1% finding rate difference over 50% to a value of $1000. So if our original model is:

Maximize Sum{D_i*Return_i} 

Subject To
  D_i element of (0,1) #decision variables are 0/1
  Sum{D_i} = 100       #so select 100 cases

We would then place an additional penalty term that looks like this:

Maximize Sum{D_i*Return_i} + Sum{D_i*[(prob_i - 0.5)*1000]}  

Subject To
  D_i element of (0,1)
  Sum{D_i} = 100

So instead of a ‘subject to’ constraint that says we need to be over a 50% finding rate, we add a second term in the objective that penalizes solutions with an under 50% finding rate. So here if the finding rate in the end is 49%, it takes a hit of $1000 in the objective function. This example is also similar to a bi-objective function; here I just set an exact translation between finding rates and returns, but in practice you often don’t have that exact translation.
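Here is a minimal sketch of that soft constraint version in PuLP, with simulated returns and finding rates (the data and variable names are made up for illustration, not taken from a real audit problem):

import numpy as np
import pulp

# simulated stand-ins for the expected return and finding rate of each case
rng = np.random.default_rng(0)
n = 500
returns = rng.normal(1000, 300, n)
prob_find = rng.uniform(0.2, 0.8, n)

model = pulp.LpProblem('SoftConstraint', pulp.LpMaximize)
D = [pulp.LpVariable(f'D_{i}', cat='Binary') for i in range(n)]
# soft constraint: each selected case adds (finding rate - 0.5)*1000 to the objective
model += pulp.lpSum(D[i]*returns[i] + D[i]*(prob_find[i] - 0.5)*1000
                    for i in range(n))
model += pulp.lpSum(D) == 100  # the hard constraint of selecting 100 cases remains
model.solve()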

It just depends on your situation whether hard constraints or soft constraints make sense. In many situations you can swap one for the other, so different means, same ends. For a good example of this, my allocating police resources paper on reducing disproportionate minority contact uses hard constraints (Wheeler, 2020), and George Mohler and colleagues have a very similar paper which uses soft constraints (Mohler et al., 2018). I imagine these will end up producing very similar ends, although in that circumstance I prefer my hard constraint approach, as with George’s you need to fiddle with the magnitude of the penalty term. Also I don’t tend to like changing the loss function for statistical/machine learning models; I just like changing what you do with the info after you have fit your model (Kleinberg et al., 2018).

Here I provide an example of where I think soft constraints make a bit of sense though. Imagine you have continuous predictive outputs and need to make a binary yes/no decision among those options, but those predictive outputs also have a variance. An example of where this comes up is if you are making loan decisions: you want your portfolio to have a high return, but you also want to lower the variance of those returns as well.

For a simple example, imagine you are the lending institution and you have the choice between two scenarios:

  • Scenario A: lend 1 person $100,000 with an expected return of $8,000, with a variance of $4,000
  • Scenario B: lend 2 people $50,000, with an expected return of each for $4,000 each, with a variance of $1,000 for each loan

Since you expect to make the same amount of money under each scenario, option B is preferable if the loans are independent of each other (e.g. one going under does not cause the other to go under). In that case, variances are additive, so the total variance of option B is $2,000, which has much less volatility than scenario A.

(Sorry to my criminology friends, this example is generic but I strain to find a criminal justice example to apply it to. It would not be crazy that you have low volatility vs high volatility hot spots. So you may want to choose a consistent hot spot as opposed to a fleeting one for an intervention. But I don’t think that will happen in practice quite like that. Choosing among expensive high risk/reward vs inexpensive low risk/reward treatment regimes in corrections settings may also make sense, but that is pie in the sky technocratic thinking given the current state of affairs as well.)

Example with Lending Club Data

So to illustrate an example with actual data, I’ve provided Python code fitting a predictive model to Lending Club data on loans. (I got the original dataset from Kaggle.) I am just going to highlight some key points here in the blog post. You will need to go to the code to see everything.

First, I’ve been introduced to this dataset as predicting a binary default/no-default outcome. I have code doing that in the snippet as well, and it performed OK. But it was very uncalibrated as to whether my portfolio made money – so even though the default estimates were pretty well spot on, my portfolios did not make much money. This is because people who default still pay back part of the loan, and quite a few people in the dataset also pay back their loans early, so the lenders don’t make as much interest as you would expect at the start of the loan.

So I cut out the middle man and just estimated a random forest model predicting the actual money one made on the loan. I only kept cases that are either 'Fully Paid', 'Charged Off', or 'Default', so I don’t model loans that are still ongoing. I then end up modeling the value total_pymnt - loan_amnt. You can look into the code to see the variables I included in the model, but one of the neat things about regression random forests is that you can not only get the mean prediction, you can also look at the variance over all of the trees. See below for a function to do that (in the 01 py file):

###############################################################
#Fit random forest model
model = RandomForestRegressor(n_estimators=1000, 
                              random_state=10, 
                              min_samples_leaf=100)
model.fit(train[x_vars], train[y_var])

#Check the predicted vs actual on the test set
y_pred = model.predict(test[x_vars]) #predicted mean
test['y_pred'] = y_pred

#I want an estimate of the variance
def tree_var(X, rf_mod):
    per_tree_pred = [pd.Series(tree.predict(X), index=X.index) for tree in rf_mod.estimators_]
    pd_res = pd.concat(per_tree_pred, axis=1)
    pd_var = pd_res.var(axis=1)
    return pd_var

test['y_var'] = tree_var(X=test[x_vars], rf_mod=model)
###############################################################

And that predicted value and variance are then what I feed into my subsequent linear programming problem (in the 02 py file). Written out, the model is:

Maximize Sum{D_i*(prediction - lambda*variance)}

Subject To:
  D element of (0,1)                #decision variables
  Sum{D_i*loan_amount_i} <= 300000  #only have so much $ to loan, so no leveraging
  

Where lambda is the tuner – higher variances will get higher penalties. So going back to our two loan example, if lambda = 1, for scenario A it would be 8000 - 1*4000 = 4000, and for scenario B it would be 8000 - 1*2000 = 6000, so the penalty would choose scenario B over A. Whereas without the penalty the two scenarios are exactly the same.
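The portfolio selection problem can be set up the same way as the earlier soft constraint example. Here is a sketch in PuLP using the predicted mean/variance columns from the test set above (the actual posted code may differ in the details):

import pulp

lam = 1.0  # the variance penalty tuner
port = pulp.LpProblem('LoanPortfolio', pulp.LpMaximize)
sel = {i: pulp.LpVariable(f'sel_{i}', cat='Binary') for i in test.index}
# objective: predicted return minus lambda times the predicted variance
port += pulp.lpSum(sel[i]*(test.loc[i, 'y_pred'] - lam*test.loc[i, 'y_var'])
                   for i in test.index)
# budget constraint: total loaned amount cannot exceed the $300k budget
port += pulp.lpSum(sel[i]*test.loc[i, 'loan_amnt'] for i in test.index) <= 300000
port.solve()
picked = [i for i in test.index if sel[i].value() == 1]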

Since the lambda value is arbitrary, I illustrate the approach selecting portfolios of loans that total $300,000 (I divided the loans by $1000 to make the numbers a bit easier to view). This is a totally held out sample of around 5k loans. So you can see my first model (with no variance penalty):

And my second model, with a higher lambda value of 1, selects more (smaller) loans and reduces the variance. Since we have the actual outcomes, I can show that both portfolios turned a profit, each above what I predicted. But the standard deviation for the second portfolio is cut by not quite half.

So you can see in that one selection it worked out OK, but this does not verify that my variance estimates are correct (they are no doubt too small, as you can see the actual returns are way higher than I predicted).

To test them out though I do a simulation. I draw 1000 cases out of those 5000, and then again pick $300k in loans. I do that process 1000 times under a set of different lambda penalty terms, [0, 0.5, 1.0, 2.0, 3.0]. For those simulations, here is the overall distribution of the returns under different penalty terms for the variance.

Note that those histograms have different X axes. It is easier to see the moments of the spread in a boxplot:

So here you can see each scenario has pretty near the same median return (somewhere around $30k), but the penalties reduce the variance. The higher penalties end up selecting portfolios that always at least make money, whereas with the lower penalty terms you do end up losing money in some scenarios.

Unfortunately I did not beat the market in my simple weekend experimentation, so don’t sink a bunch of money into Lending Club based on this! The average returns starting from $300k are something like an annualized rate of return of around 3% (over 3 years) for the smaller simulation picks from 1000 cases. These include loans of 60 months as well though. So even though my linear program with the penalty term did a good job of reducing the risk, this isn’t good enough returns for me to put a bunch of my money into Lending Club.

But there are no doubt improvements to be made, both to the modeling as well as the portfolio selection. For the modeling I would be tempted to try out a discrete time survival model for payments over time, but that would be more work than I could do in a weekend. (Also I only incorporated the easy continuous variables here I could prepare in just a few minutes, so maybe more feature engineering would boost my results.)

I could also adapt the linear program to take into account the covariance between loans, but I am not sure how to estimate it (a multi-level model perhaps?). You may also want to do some sort of conditional value at risk approach in the linear program – say, instead of piping in the variance from the random forests, count up how often you lose money and put that as a penalty or constraint on the system.


GPU go brrr: Estimating OLS (with standard errors) via deep learning

So a bunch of my criminologist friends have methods envy. So to help them out, I made some python functions to estimate OLS models using pytorch (a deep learning python library). So if you use my functions, you can just append something like an analysis with GPU accelerated deep learning to the title of your paper and you are good to go. So for example, if your paper is An analysis of the effects of self control on asking non-questions at an ASC conference, you can simply change it to A GPU accelerated deep learning analysis of the effects of self control on asking non-questions at an ASC conference. See how that works, instantly better.

Code and data saved here. There are quite a few tutorials on doing OLS in deep learning libraries, only thing special here is I also calculate the standard errors for OLS in the code as well.

python code walkthrough

So first I just import the libraries I need. Then change the directory to wherever you saved the code on your local machine, and then import my deep_ols script. The deep_ols script also relies on the scipy.stats library, but besides pytorch it is the usual scientific python stack.

import os
import sys
import torch
import statsmodels.api as sm
import pandas as pd
import numpy as np

###########################################
#Setting the directory and importing
#My deep learning OLS function

my_dir = r'C:\Users\andre\OneDrive\Desktop\DeepLearning_OLS'
os.chdir(my_dir)
sys.path.append(my_dir)
import deep_ols
############################################

The dataset I am using is a set of counts of doctor visits, which I took from some of the Stata docs.

#Data from Stata, https://www.stata-press.com/data/r16/gsem_mixture.dta
#see pg 501 https://www.stata.com/manuals/sem.pdf

visit_dat = pd.read_stata('gsem_mixture.dta')
visit_dat['intercept'] = 1
y_dat = visit_dat['drvisits']
x_dat = visit_dat[['intercept','private','medicaid','age','educ','actlim','chronic']]

print( y_dat.describe() )
print( x_dat.describe() )

Then estimating the model is simply the line below. The function returns the torch model object, the variance/covariance matrix of the coefficients (as a torch tensor), and then a nice pandas data frame of the results.

mod, vc, res = deep_ols.ols_deep(y=y_dat,x=x_dat)

As you can see, it prints out GPU accelerated loss results so fast for your pleasure.

Then, like a champ you can see the OLS results. P-values < 0.05 get a strong arm, those greater than this get a shruggie. No betas to be found in my model results.

Just to confirm, I show you get the same results using statsmodels.

stats_mod = sm.OLS(y_dat,x_dat)
sm_results = stats_mod.fit()
print(sm_results.summary())

So get on my level bro and start estimating your OLS models with a GPU.

Some Notes

First, don’t actually use this in real life. I don’t do any pre-processing of the data, so if you have some data that is not on a reasonable scale (e.g. if you had a variable income going from 0 to $500,000) all sorts of bad things can happen. Second, it is not really accelerated – I mean it is on the GPU and the GPU goes brr, but I’m pretty sure this will not ever be faster than typical OLS libraries. Third, this isn’t really “deep learning” – I don’t have any latent layers in the model.

Also note the standard errors for the coefficients are just calculated using a closed form solution that I pilfered from this reddit post. (If anyone has a textbook reference for that I would appreciate it! I Marie Kondo’d most of my text books when I moved out of my UTD office.) If I figure out the right gradient calculations, I could probably also add ‘robust’ somewhere to my suggested title (by using the outer product of gradients approach to get a robust variance/covariance matrix for the coefficients).
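For reference, the closed form is the usual OLS variance formula, Var(beta_hat) = sigma^2 * (X'X)^-1. A numpy sketch of what that looks like (using the same x_dat/y_dat as above, not the code inside deep_ols itself):

# numpy version of the closed-form OLS coefficient variance, for reference
X = x_dat.to_numpy(dtype=float)
y = y_dat.to_numpy(dtype=float)
beta = np.linalg.solve(X.T @ X, X.T @ y)            # OLS coefficients
resid = y - X @ beta
sigma2 = resid @ resid / (X.shape[0] - X.shape[1])  # residual variance
vcov = sigma2 * np.linalg.inv(X.T @ X)              # variance/covariance of the coefficients
std_err = np.sqrt(np.diag(vcov))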

As a bonus in the data file, I have a python script that shows how layers in a vanilla deep learning model (with two layers) are synonymous with a particular partial least squares model. The ‘relu activation function’ is equivalent to a constraint that restricts the coefficients to be positive, but otherwise will produce equivalent predictions.

You could probably do some nonsense of saving the Jacobian matrix to get standard errors for the latent layers in the neural network if you really wanted to.

To end, for those who are really interested in deep learning models, I think a better analogy is that they are just a way to specify and estimate a set of simultaneous equations. Understanding the nature of those equations will help you relate how deep learning is similar to regression equations you are more familiar with. The neuron analogy is just nonsense IMO and should be retired.

Some more peer review musings

It is academics favorite pastime to complain about peer review. There is no getting around it – peer review is degrading, both in the adjective sense of demoralizing, as well as in the verb ‘wear down a rock’ sense.

There is nothing quite like it in other professional spheres that I can tell. For example if I receive a code review at work, it is not adversarial in nature like peer review is. I think most peer reviewers treat the process like a prosecutor, poking at all the minutia that they can uncover, as opposed to being judges of the truth.

At work I also have a pretty unobjectionable criterion for success – if my machine learning model is making money then it is good. Not so for academic papers. Despite everyone learning about the ‘scientific method’, academics don’t really have a code to follow like that. And that lack of objective criteria causes quite a bit of friction in the peer review process.

But I have a hard time articulating succinctly all the things I think are wrong with peer review. We have a bias problem, in that reviewers have preferences for particular methods or styles. We have a problem that individuals get judged based on highly arbitrary standards of what is interesting. Many critiques are focused on highly pedantic things that make little to no material difference, such as the use of personal pronouns. Style advice can be quite bad to be frank; in my career I’m up to something like four different peer reviews saying providing results in the introduction is inappropriate. Edit: And these complaints are not exhaustive either – we have reviewers pile on endless complaints in multiple rounds, and people who phone it in with nonsense short reviews as well. (I’m sure I could continue to add to the list, but I’ve personally experienced all of these things, in addition to being called a racist in peer review.)

I’ve previously provided advice about how I think peer reviews should be done. To sum up my advice:

  • Differentiate between big problems and minor stuff
  • Be very specific what things you want changed (and how to change them)

I think I should add two more to this as well – don’t be a jerk, and don’t pile on (i.e. be respectful of people’s time). For the jerk part I don’t even meet that standard if I am being honest with myself. For one of the peer reviews I am going to share, in a later round I got pretty snippy (with the Urban folks on their CCTV paper in Milwaukee), which was over the top in retrospect. (I don’t think reviewers should get a say on revisions; editors should just arbitrate whether the responses by the original authors were sufficient. I am pretty much never happy if I suggest something and an author says no thanks.)

For the pile on part, I recently posted my responses to my cost of crime hot spots paper. Although all three reviewers were positive, it still took me ~40 hours to respond to all of the critiques. Even though all of the reviews were in good faith, it honestly was not worth my time to make those revisions. (I think two were legitimate, changing the front end to say my hot spots are crime cost, not crime harm, and the ask for more details on the Hunt estimates. The rest were just fluff though.)

If you do consulting, think about your rate – and whether addressing all those minor critiques meets the threshold of ‘is the paper improved by an amount that justifies my consulting fee’. In my experience it does not come close, and I am quite a cheap consultant.

So I have shared in the past a few examples of my response to reviewers (besides above, see here for the responses to my how to select participants for focussed deterrence paper, and here for my tables and graphs for crime analysis paper). But maybe instead of bagging on others, I should just try to lead by example.

So below are several of my recent reviews. I’ve only pulled out recent ones where I know the paper has since been published. This is then subject to a selection bias – papers I had more negative things to say about are less likely to be published in the end. So keep that bias in mind when judging these.

The major way I believe I am different from the modal reviewer is that I subdivide between major and minor concerns. Many reviewers take the running-commentary-through-the-paper approach, which not only fails to distinguish between major/minor concerns, but many of those critiques are often already explicitly addressed (just at a different point in the manuscript from where the critique popped into the reviewer’s head).

The way I do my reviews, I actually do the running commentary in the margins of the paper, then I sleep on it, then I organize my notes into the major sections. In this process of organizing I drop quite a bit of minor fluff, or things that are cleared up at later points in the paper. (I probably end up deleting ~50% of my original notes.)

Second, for papers I give a thumbs up, I take actual time to articulate why they are good. I don’t just give a compliment sandwich and pile on a series of minor critiques. Positive comments are fleeting in peer review, but necessary to judge a piece.

So here are some of my examples from peer review, take them for what they are worth. I no doubt do not always follow the advice I lay out above, but I try my best to.


Title: Going Local: Do Consent Decrees and Other Forms of Federal Intervention in Municipal Police Departments Reduce Police Killings?

The article is well written and a timely topic. The authors have a quote that I think sums it up quite nicely “This paper is therefore the first to evaluate the effects of civil rights investigations and the consent decree process on an important – arguably the most important – measure of use of force: death.” Well put.

The analysis is also well executed. It is observational, but city fixed effects and the event history study were the main things I was looking for.

I have three major revision requests. But they are just editing-the-manuscript type stuff, so can be easily accomplished by the authors.

  1. Drop the analysis of Crime Rates

While I think the analysis of officer deaths is well done, I have more doubts about the analysis of crime rates. The front end is not focused on this at all, and would need a whole different section discussing recent work in Chicago/Baltimore/NYC. Also I don’t think the same analysis (fixed effects) is sufficient for crime trends – we are basically talking about the crime drop period. It is also very important to assess heterogeneity in that analysis – a big part of the discussion is that Chicago/Baltimore are different than NYC.

The analysis of officer deaths is sufficient to stand on its own. I agree the crime rates question is important – save it for another paper where it can be given enough attention to do the topic justice.

  2. Conclusion

Like I said, I was most concerned about the event study, and the authors show that there was some evidence of pre-treatment selection trends. You don’t talk about the event study results in the conclusion though. It is a limitation that the event study analysis had less clear evidence of crime reductions than the simpler pre/post analysis.

This is likely due to larger error bars with the rare outcome, which is itself a limitation of using shootings as the outcome. I think it deserves to be mentioned that even if the overall effects of consent decrees on reducing officer involved deaths are not 100% clear, locally monitoring more frequent outcomes is worthwhile. See the recent work by MacDonald/Braga in NYC. New Orleans is another example, https://journals.sagepub.com/doi/full/10.1177/1098611117709785.

I note the results are important to publish regardless of the direction of the findings. It would be important to publish this work even if it did not show reductions in officer involved deaths.

  3. Front end lit review

For 2.2 racial disparities, this section misses much of the recent work on the topic of officer involved shootings except for Greg Ridgeway’s article. There is a wide array of work that uses other counterfactual approaches to examine implicit bias, see Roland Fryer’s work (https://www.journals.uchicago.edu/doi/abs/10.1086/701423), or the Wheeler/Worrall papers on Dallas shoot/don’t shoot data (https://journals.sagepub.com/doi/full/10.1177/1525107118759900 & https://journals.sagepub.com/doi/full/10.1177/0011128718756038). These are the observational dual to the experimental lab work by Lois James. Also there are separate papers by Justin Nix (https://www.tandfonline.com/doi/full/10.1080/0735648X.2018.1547269), and Joseph Cesario (https://journals.sagepub.com/doi/10.1177/1948550618775108) that show when using different benchmarks estimates of disparity can vary by quite a bit.

For the use of force review (section 2.3), people typically talk about situational factors that influence use of force (in addition to individual/extra-legal and organizational ones). So you may say “consent decrees don’t/can’t change the situational behavior of offenders, so what is the point of writing about it” – ’tis true, but it still should be articulated. To the extent that situational factors are a large determinant of shootings, it may be that consent decrees are not the right way to reduce officer deaths if it is all situational. But consent decrees may indirectly affect police/citizen interactions (such as via de-escalation or procedural justice training), and that could be a mechanism through which fewer officer deaths occur.

Long story short, 2.2 should be updated with more recent work on officer involved shootings and the benchmark problem, and 2.3 should be updated to include discussion of the importance of situational factors in the use of force.

Additional minor stuff:

  • pg 12, killings are not a proxy for use of force (they count as force!)
  • regression equations need some editing. Definitely need a log or exponential function on one of the sides of the equation, and generalized linear models do not have an error term like linear models do. I personally write them as:

log(E[crime_it]) = intercept + B1*X + …..

where E[crime_it] is the expected value of crime at place i and time t (equivalent to lambda in your current formulation).

  • pg 19 monitor misspelled (equation type)

Title: Immigration Enforcement, Crime and Demography: Evidence from the Legal Arizona Workers Act

Well done paper; it uses current status quo empirical techniques to estimate the effect that employment oversight of illegal immigrant workers had on subsequent crime reductions.

Every critique I thought of was systematically addressed in the paper already: discussing issues with potential demographic spillovers (biasing estimates because controls may have crime increases), eliminating states with stronger E-Verify support from the pool of placebos, later robustness checks for neighboring states, and using some simple analyses to give the decompositions that would be expected due to the decrease in the share of young males.

Minor stuff

  • Light and Miller quote on pg 21 – not sure if it is a missing space/dash or just kerning problems in LaTeX
  • Pg 26, you do the estimates for 08 and 09 separately, but I think you should pool 08-09 together in that table (the graphs do the visual tests for 08 & 09 independently). For example, violent crimes are negative for both 08 & 09, which in the graphs are not outside the typical bounds for each individual year, but cumulatively that may be quite low (most placebos will have one year low and the other high). This should get you more power I think, given the few potential placebo tests. So it should be something like (-0.067 + -0.108) with error bars for violent crimes.
  • I had to stare at your change equation (pg 31) for quite a bit. I get the denominator of equation 2, I’m confused about the numerator (although it is correct). Here is how I did it (using your m1, m2, a, and X).

Pre-Crime = m1*aX + (1 - m1)X = X * (m1*a + 1 - m1)

Post-Crime = m2*aX + (1 - m2)X = X * (m2*a + 1 - m2) #so you can drop the X

% Change = (Post - Pre) / Pre = Post/Pre - 1

At least that is easier for me to wrap my head around. Also should m1 and m2 be the overall share of young adults? Not just limited to immigrants? (Since the estimated crime reduction is for everybody, not just crimes committed by immigrants?)


Title: How do close-circuit television cameras impact crimes and clearances? An evaluation of the Milwaukee Police Department’s public surveillance system

Well written paper. Uses appropriate quasi-experimental methods (matching and diff-in-diff) to evaluate reductions in crimes and potential increases in case clearances.

I have some minor requests on analysis/descriptive stats, but things that can be easily addressed by the authors.

First, I would request a simpler pre-post DiD table. The panel models not only introduce more complications of interpretation, but they are based on asymptotics that I’m not sure are met here.

So if you do something like:

              Pre Crime   Post Crime   Difference   DiD
Treated          100           80          -20      -30
Control          100          110           10

It is much more straightforward to interpret, and I provide a stat test and power analysis advice in https://crimesciencejournal.biomedcentral.com/articles/10.1186/s40163-018-0085-5. Your violent crime counts are very low, so I think you would need unrealistic effects (of say 50% reductions) to detect an effect with your group sizes.

You can do the same table for % clearances, and do whatever binomial test of proportions (which will have much more power than the regression equation). And again is simpler to interpret.

Technically, doing a Poisson regression with total crimes as an exposure is not the same as modelling the clearance counts as a binomial outcome out of the total crimes. The predicted counts can technically go above the number of crimes (so you can have %’s above 100%). It would be more appropriate to use binomial regression, so something like below in Stata:

glm arrests i.Treat i.Post Treat#Post i.Unit, family(binomial crimes) link(logit)

(No xtbinomial unfortunately, maybe can coerce xtlogit with weights to do it, or use meglm since you are doing random effects. I think you should probably do fixed effects here anyway.) I don’t know if it will make a difference in the results here though.

Andy Wheeler


Title: The formation of suspicion: A vignette study

Well done experimental study examining suspiciousness using a vignette approach. There is no power analysis done, but it is a pretty large sample, and only examines first order effects. (With a note about examining a well powered interaction effect described in the analysis section.)

My only big ask is that the analysis should include a dummy variable for the two different sampling frames (e.g. a dummy variable for 1=New York). Everything else is minor/easy stuff.

Minor stuff:

  • How were the vignettes randomized? (They obviously were, balance is really good!)
  • For the discussion, it is important to understand these characteristics that start an interaction because of another KT heuristic bias – anchoring effects. Paul Taylor has some recent work of interest on dispatch priming that is relevant. (Also Dan Mears had a recent overview paper on biases/heuristics in Journal of Criminal Justice that I think should probably be cited.)
  • For Table 1 I would label the “Dependent Variable” with a more descriptive label (suspiciousness)
  • Also, people typically code the variables 0/1 instead of 1/2; it actually won’t impact the analysis here, since it is just a linear shift of +1 (just change the intercept term). The variables of Agency Size, Education, & Experience are coded as ordinal variables though, and they should maybe be included as dummy variables for each category. (I don’t think this will make a big difference for the main randomized variables of interest though.)

Title: The criminogenic effect of marijuana dispensaries in Denver, Colorado: A microsynthetic controls quasi-experiment and cost-benefit analysis

This work is excellent. It is a timely topic, as many states are still considering whether to legalize and what that will exactly look like.

The work is a replication/extension of a prior study also in Denver, but this is a stronger research design. Using the micro units allows for matching on pre-trends and drawing synthetic controls, which is a stronger design than prior pre/post work at neighborhood level (both in Denver and in LA). Micro units are also more relevant to test direct criminogenic effects of the stores. The authors may be interested in https://www.tandfonline.com/doi/full/10.1080/0735648X.2019.1582351, which is also a stronger research design using matched comparison groups, but is only for medical dispensaries in DC and is a much smaller sample.

Even if one considers that to be a minor contribution (crime increase findings are similar magnitude to Hughes JQ paper), the cost benefit analysis is a really important contribution. It hits on all of the important considerations – that even if the book of costs/benefits is balanced, they are relevant to really different segments of society. So even if tax revenue offsets the books, places may still not want to take on that extra crime burden.

Only two (very minor) suggestions. One, some of the permutation lines are outside of the figure boundaries. Two, I would like a brief aside in the discussion mentioning the trade-off of economic investment making places busier (which fits right into the discussion of the costs and benefits). Likely if you added 100 ice-cream shops crime might go up due to increased commercial activity – weed has the same potential negative externality – but it is not necessarily worse than say opening a bunch of bars or convenience stores. (The same thing is relevant for any BID, https://journals.sagepub.com/doi/full/10.1177/0011128719834559.)


Title: Understanding the Predictors of Street Robbery Hot Spots: A Matched Pairs Analysis and Systematic Social Observation

Note: Reviewed for Justice Quarterly and rejected. Final published version is at Crime & Delinquency (which I was not a reviewer for)

The article uses the case-control method to match high-crime to low-crime street segments, and then examine local land use factors (bars, convenience stores, etc.) as well as the more novel source of physical disorder coded from Google Street View images. The latter is the main motivation for the case-control design, as manual coding prevents one from doing the entire city.

With case-control designs, by their nature you cannot manipulate the number of cases you have at your disposal. Thus the majority of such designs typically focus on ONE exposure of interest, and match on any other characteristic that is known to affect the outcome but is not of direct interest to the study. E.g. if you examined lung cancer given exposure to vehicle emissions, you would need to match controls on whether they smoked or not. This allows you to assess the exposure of interest with the maximum power given the design limitations, although you can’t say anything about the effect of smoking vs not smoking on the outcome.

The authors here match within neighborhoods and on several other local characteristics, but then go on to examine different crime generators (7 factors), as well as the two coded disorder variables. Given the fairly small sample size of 129 matched cases, this is likely a pretty underpowered research design. Logistic regression relies on asymptotic properties, and even with fewer variables it is questionable whether 260 cases is sufficient, see https://www.tandfonline.com/doi/abs/10.1080/02664763.2017.1282441. Thus you get odds ratios that are fairly large in absolute terms, but still not significant (e.g. physical disorder has an odds ratio of 1.7, but is insignificant). So you have low power to test all of those coefficients.

I believe a stronger research design would focus on the novel measures (the Google Street View disorder), and match on the crime generator variables. The crime generator factors have been well established in prior research, so that work is a small contribution. The front end focuses on the typical crime generator typology, and lacks any broader “so what” about the disorder measures. (Which can be made, focusing on broken windows theory and the controversy it has caused.)

It would be feasible for authors to match on the crime generators, but it would result in different control cases, so they would need to code additional locations. (If you do match on crime generators, I think it is OK to include 0 crime areas in the pool. Main reason it is sometimes not fair to include 0 crime areas is because they are things like a bridge or a park.)

Minor notes:

  • You cannot interpret the coefficients for the variables on which you matched in causal terms in the conclusion (top page 23). They only say the extent to which your matching was successful, not anything about causality. Later on you also attempt to weasel out of causal interpretations (page 26). Saying this is not causality, but otherwise interpreting the regression coefficients as if they have any meaning, is an act of cognitive dissonance.
  • Given that there is perfect separation between convenience stores and hot spots, the model should have infinite standard errors for that factor. (You have a few coefficients that appear to have explosive standard errors.)
  • I wouldn’t describe it as matching on the dependent variable [bottom page 8]. I realize it is confusing to mix propensity score terminology with case-control (although the method is fine). You match cases to controls on the independent variables you choose at the outset.
  • page 5, Dan O’Brien in his work has shown that you have super-callers for 311. This fits right into your point about how coding of images may be better, as a single person can bias the measure at micro places. (This is sort of the opposite of the not-calling problem you mention.)
  • You may be interested, some folks have tried to automate the scoring part using computer vision, see https://ieeexplore.ieee.org/document/6910072 or https://mikebader.net/projects/canvas-usa/. George Mohler had a talk at ASC where he used Google’s automated image labeling to identify disorder (like graffiti) in pictures, https://cloud.google.com/vision/docs/labels

 

Adjusting predicted probabilities for sampling

Ryx had a blog post the other day about how many people confuse how to turn predicted probabilities into a final 0/1 classification decision, Why Do So Many Practicing Data Scientists Not Understand Logistic Regression?. (I highly suggest Ryx’s blog and twitter feed in general, the opinions expressed frequently mirror my own very closely.)

So I won’t go into why SMOTE (synthetic upsampling of the rare class) in general doesn’t make sense for most predictive applications here. (It may make sense in some complicated machine learning scenarios I don’t fully understand.) But there are a few scenarios where downsampling makes total sense. One is in case control studies, where it is costly to sample the control cases. (I was motivated in part to write this because I reviewed a paper the other day that estimated marginal effects on the probability scale using case-control data, so they needed to adjust them using the same technique I show here.) The other scenario, which I expect is more common for the working data scientist, is you just have a crazy large dataset, so you need to downsample just to fit the model of interest.

Say you are looking at fraud in bank transactions, and you have 500 million transactions and only 500,000 identified fraud cases. It makes total sense to take a sample of 500,000 control cases and then fit your models on the cases vs controls using whatever complicated model you want just so you can get an answer in a decent time on your local machine.

The predicted probabilities from that downsampled sample will be wrong though. But fortunately it is quite easy to adjust them back to the scale you want (and this works just as well for SMOTE upsampling). I illustrate using an example in python and XGBoost. Most examples online show this for GLMs, but it works the same way for any model that returns a predicted probability.

So first let’s load our libraries and create some simulated data. Here the positive class only occurs around 5% of the time.

#############################################
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from xgboost import XGBClassifier

#Creating simulated data

np.random.seed(10)
total_cases = 1000000
x1 = np.random.uniform(size=total_cases)
x2 = np.random.binomial(1,0.5,size=total_cases)
x3 = np.random.poisson(5,size=total_cases)
y_logit = -1 + 0.3*x1 + 0.1*x2 + -0.5*x3 + 0.05*x1*x2 + -0.03*x2*x3
y_prob = 1/(1 + np.exp(-y_logit))
y_bin = np.random.binomial(1,y_prob)
my_vars = ['x1','x2','x3','y_prob','y_bin']
sim_data = pd.DataFrame(zip(x1,x2,x3,y_prob,y_bin),
                        columns=my_vars)
x_vars = my_vars[:3]
y_var = my_vars[-1]

#Checking the distribution of the outcome; make the intercept
#more negative if you want the positive class to be rarer
print( sim_data[y_var].mean() )
sim_data['y_prob'].hist(bins=100)
#############################################

The model is not that complicated (just some interactions), so hopefully XGBoost has no problem fitting it. But say we are in the ‘need to downsample because my data is too big’ scenario. So here I create a training dataset that has a 50/50 split of positive/negative cases, and keep the test data the same.

#############################################
#Creating test/train samples

sim_data['index'] = range(sim_data.shape[0])
train = sim_data[sim_data['index'] <= 700000]
test = sim_data[sim_data['index'] > 700000]

#downsampling the training dataset
#so classes are equal
down_n = train['y_bin'].sum()
down_prop = train['y_bin'].mean()
down_neg = train[train['y_bin'] == 0].sample(n=down_n)
down_pos = train[train['y_bin'] == 1]
train_down = pd.concat([down_neg,down_pos],axis=0)

wrong_pr =  train_down[y_var].mean()
print( wrong_pr )
#############################################

So if you are following along in python, wrong_pr here is exactly 0.5 by construction. So next we fit our XGBoost model, generate the predicted probabilities on the test dataset, and then draw a lift-calibration chart. (If you are not familiar with what XGBoost is, I suggest this statquest series of videos. You can just treat it as a black box here that outputs predicted probabilities.)

#############################################
#Upping the number of trees for a higher resolution of 
#predicted probabilities
model = XGBClassifier(n_estimators=1000)
model.fit(train_down[x_vars], train_down[y_var])

#making predictions for the test set
#probabilities
y_pred = model.predict_proba(test[x_vars])
test['y_pred_down'] = y_pred[:,1]

#Now looking at the calibration
def cal_data(prob, true, data, bins, plot=False, figsize=(6,4), save_plot=False):
    cal_dat = data[[prob,true]].copy()
    cal_dat['Count'] = 1
    cal_dat['Bin'] = pd.qcut(cal_dat[prob], bins, range(bins) ).astype(int) + 1
    agg_bins = cal_dat.groupby('Bin', as_index=False)[['Count',prob,true]].sum()
    agg_bins['Predicted'] = agg_bins[prob]/agg_bins['Count']
    agg_bins['Actual'] = agg_bins[true]/agg_bins['Count']
    if plot:
        fig, ax = plt.subplots(figsize=figsize)
        ax.plot(agg_bins['Bin'], agg_bins['Predicted'], marker='+', label='Predicted')
        ax.plot(agg_bins['Bin'], agg_bins['Actual'], marker='o', markeredgecolor='w', label='Actual')
        ax.set_ylabel('Probability')
        ax.legend(loc='upper left')
        if save_plot:
            plt.savefig(save_plot, dpi=500, bbox_inches='tight')
        plt.show()
    return agg_bins

cal_down = cal_data(prob='y_pred_down',true=y_var,data=test,bins=60,
                    plot=True, figsize=(6,6)) 
#############################################

Oh my, you can see that our calibration is very bad. I am predicting something to happen 80% of the time in the top bin, but in reality it only happens 20% of the time. But we can fix it using an adjustment formula (I originally saw the idea from Norm Matloff, and rewrote his R function in python).

#############################################
#Rebalancing function

#rewrite from
#https://github.com/matloff/regtools/blob/master/inst/UnbalancedClasses.md
#and https://www.listendata.com/2015/04/oversampling-for-rare-event.htm
def classadjust(condprobs,wrongprob,trueprob):
    a = condprobs/(wrongprob/trueprob)
    comp_cond = 1 - condprobs
    comp_wrong = 1 - wrongprob
    comp_true = 1 - trueprob
    b = comp_cond/(comp_wrong/comp_true)
    return a/(a+b)

test['y_pred_adj'] = classadjust(test['y_pred_down'], wrong_pr, down_prop)

cal_adj = cal_data(prob='y_pred_adj',true=y_var,data=test,bins=60,
                    plot=True, figsize=(6,6))
#############################################

Alright, now we are much closer. It still overpredicts in the highest bins (predicting 40% when it should predict only 20%), but it is well calibrated for low predictions.
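As a quick numeric sanity check on the adjustment (a minimal sketch reusing the classadjust function, wrong_pr, and down_prop already defined above), you can see how a downsampled prediction of 0.8 maps back to roughly 0.17 on the original ~5% prevalence scale:

#############################################
#Quick sanity check of the adjustment formula
#using the same wrong_pr/down_prop as above
check = classadjust(np.array([0.2, 0.5, 0.8]), wrong_pr, down_prop)
print( check ) #roughly [0.013, 0.05, 0.17] given down_prop of ~0.05
#############################################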

For kicks, I estimated the XGBoost model using the original sample data that is unbalanced and calculated the calibration plot.

#############################################
#What does it look like if I train with original?

model2 = XGBClassifier(n_estimators=1000) 
model2.fit(train[x_vars], train[y_var]) #takes a minute!

#making predictions for the test set
#probabilities
y_pred = model2.predict_proba(test[x_vars])
test['y_pred_orig'] = y_pred[:,1]

cal_adj = cal_data(prob='y_pred_orig',true=y_var,data=test,bins=60,
                    plot=True, figsize=(6,6))
#############################################

So this is slightly better than the adjusted downsampled XGBoost, but it still shows it is overpredicting for the tail of the dataset. But the overprediction here is like 31% vs 23%, whereas before we were talking about 40% vs 20%.

Why these calibration metrics matter is that estimates of how much money your model is making in practice will almost always rely on correctly estimated predicted probabilities, which translate into expected true positives and false negatives. If the model is not well calibrated, your estimates of these will be gravely off.

It doesn’t always matter though; these transformations don’t change the rank order of the predictions. So say you are always just grabbing the top 100 predictions, this adjustment does not change which predictions are in the top 100. It will change how many of those cases you expect to translate into true positives though.
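If you want to convince yourself of that, here is a small check (a sketch, assuming the test data frame with the y_pred_down and y_pred_adj columns from above is still in memory) that the ranks, and thus the top 100 cases, are unchanged:

#############################################
#The adjustment is a monotone transform, so ranks are preserved
print( (test['y_pred_down'].rank() == test['y_pred_adj'].rank()).all() )

#e.g. the top 100 cases are the same either way
top_down = set(test['y_pred_down'].nlargest(100).index)
top_adj = set(test['y_pred_adj'].nlargest(100).index)
print( top_down == top_adj )
#############################################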

 

Creating a basemap in python using contextily

Gio and I received a peer review asking for a nice basemap in Philadelphia showing the relationship between hospital locations and main arterials for our paper on shooting fatalities.

I would typically do this in ArcMap, but since I do not have access to that software anymore, I took some time to learn the contextily library in python to accomplish the same task.

Here is the map we will be producing in the end:

So if you are a crime analyst working for a specific city, it may make sense to pull the original vector data for streets/highways and create your own style for your maps. That is quite a bit of work though, so for a more general solution these basemaps are really great. (And they are honestly nicer than I could personally make even with the original vector data anyway.)

Below I walk through the python code, but the data to replicate my paper with Gio can be found here, including the updated base map python script and shapefile data.

Front matter

So first, I have consistently had a difficult time working with the various geo tools in python on my windows machine. Most recently the issue was that an older version of pyproj and its handling of EPSG codes were giving me fits. So at the recommendation of the geopandas folks, I just created my own conda environment for geospatial stuff, and that has worked nicely so far.

So here I need geopandas, pyproj, & contextily as the non-standard libraries. Then I change my working directory to where I have my data, and then update my personal matplotlib defaults.

'''
Python script to make a basemap
For Philadelphia
'''

import geopandas
import pyproj
import contextily as cx
import matplotlib
import matplotlib.pyplot as plt
import os
os.chdir(r'D:\Dropbox\Dropbox\School_Projects\Shooting_Survival_Philly\Analysis\OriginalData')

#Plot theme
andy_theme = {'axes.grid': True,
              'grid.linestyle': '--',
              'legend.framealpha': 1,
              'legend.facecolor': 'white',
              'legend.shadow': True,
              'legend.fontsize': 14,
              'legend.title_fontsize': 16,
              'xtick.labelsize': 14,
              'ytick.labelsize': 14,
              'axes.labelsize': 16,
              'axes.titlesize': 20,
              'figure.dpi': 100}

matplotlib.rcParams.update(andy_theme)

Data Prep with geopandas & pyproj

In the next part we load our shapefile into a geopandas data frame (just a border for Philly), then I define the locations of the hospitals (with level 1 trauma facilities) directly in the code.

Note that the background is in projected coordinates, so then I use some updated pyproj code to transform the lat/lon into the local projection I am using.

I thought at first you needed to only use typical web map projections to grab the tiles, but Dani Arribas-Bel has done a bunch of work to make this work for any projection. So I prefer to stick to projected maps when I can.

If you happened to want to stick to typical web map projections though geopandas makes it quite easy using geo_dat.to_crs('epsg:4326').

#####################################
#DATA PREP

ph_gp = geopandas.GeoDataFrame.from_file('City_Limits_Proj.shp')

#Locations of the hospitals
hos = [('Einstein',40.036935,-75.142657),
       ('Hahneman',39.957284,-75.163222),
       ('Temple',40.005507,-75.150257),
       ('Jefferson',39.949121,-75.157631),
       ('Penn',39.949819,-75.192883)]

#Convert to local projection
transformer = pyproj.Transformer.from_crs("epsg:4326", ph_gp.crs.to_string())
hx = []
hy = []
for h, lat, lon in hos:
    xp, yp = transformer.transform(lat, lon)
    hx.append(xp)
    hy.append(yp)
#####################################

Making the basemap

Now onto the good stuff. Here I use the default plotting methods from geopandas boundary plot to create a base matplotlib plot object with the Philly border outline.

Second I turn off the tick marks.

Next I have some hacky code to do the north arrow and scale bar. The north arrow is using annotations and arrows, so this just relies on the fact that north is up in the plot. (If it isn’t, you will need to adjust this for your map.)

The scale bar is more straightforward – I just plot a rectangle on the matplotlib plot, and then put text in the middle of the bar. Since the projected units are in meters, I just draw a rectangle that is 5 kilometers long.

Then I add in the hospital locations. Note I gave the outline a label, as well as the hospitals. This is necessary to have those objects saved into the matplotlib legend, which I add to the plot after this and increase from the default size.

Finally I add my basemap. I do not need to do anything special here, the contextily add_basemap function figures it all out for me, given that I pass in the coordinate reference system of my map. (You can take out the zoom level argument at first, 12 is the default zoom for Philly.)

Then I save the file to a lower res PNG.

#####################################
#Now making a basemap in contextily

ax = ph_gp.boundary.plot(color='k', linewidth=3, figsize=(12,12), label='City Boundary', edgecolor='k')
#ax.set_axis_off() #I still want a black frame around the plot
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])

#Add north arrow, https://stackoverflow.com/a/58110049/604456
x, y, arrow_length = 0.85, 0.10, 0.07
ax.annotate('N', xy=(x, y), xytext=(x, y-arrow_length),
            arrowprops=dict(facecolor='black', width=5, headwidth=15),
            ha='center', va='center', fontsize=20,
            xycoords=ax.transAxes)

#Add scale-bar
x, y, scale_len = 829000, 62500, 5000 #arrowstyle='-'
scale_rect = matplotlib.patches.Rectangle((x,y),scale_len,200,linewidth=1,edgecolor='k',facecolor='k')
ax.add_patch(scale_rect)
plt.text(x+scale_len/2, y+400, s='5 KM', fontsize=15, horizontalalignment='center')

#Add in hospitals as points
plt.scatter(hx, hy, s=200, c="r", alpha=0.5, label='Trauma Hospitals')

#Now making a nice legend
ax.legend(loc='upper left', prop={'size': 20})

#Now adding in the basemap imagery
cx.add_basemap(ax, crs=ph_gp.crs.to_string(), source=cx.providers.CartoDB.Voyager, zoom=12)

#Now exporting the map to a PNG file
plt.savefig('PhillyBasemap_LowerRes.png', dpi=100) #bbox_inches='tight'
#####################################

And voila, you have your nice basemap.

Extra: Figuring out zoom levels

I suggest playing around with the DPI, the zoom levels, and the background tile server to see what works best given the thematic info you are superimposing on your map.

Here are some nice functions to help see the default zoom level, how many map tiles need to be downloaded when you up the default zoom level, and a list of various tile providers available. (See the contextily github page and their nice set of notebooks for some overview maps of the providers.)

#####################################
#Identifying how many tiles
latlon_outline = ph_gp.to_crs('epsg:4326').total_bounds
def_zoom = cx.tile._calculate_zoom(*latlon_outline)
print(f'Default Zoom level {def_zoom}')

cx.howmany(*latlon_outline, def_zoom, ll=True) 
cx.howmany(*latlon_outline, def_zoom+1, ll=True)
cx.howmany(*latlon_outline, def_zoom+2, ll=True)

#Checking out some of the other providers and tiles
print( cx.providers.CartoDB.Voyager )
print( cx.providers.Stamen.TonerLite )
print( cx.providers.Stamen.keys() )
#####################################

Resources of interest for criminologists and crime analysts

I tend to get about one email per week asking for help. The majority of folks are either students asking for general research advice, or individuals who came across my webpage asking for advice about code.

This is great, and everyone should always feel open to send me an email. The utility of me answering these questions (for everyone) is likely greater than spending that time working on a paper, so I do not mind at all. I can currently keep up with the questions given the volume (but not by much, and it is dependent on how busy I am with other work/family things). Worst case I will send an email response that says sorry, I cannot respond to this anytime soon.

Many times though there are other forums for people to post questions that are ultimately better. For one, I participate in many of these, so it is not like sending an email just to me, it is like sending an email to me plus 40 other people who can answer your question. Also from my perspective it is better to answer a question once in one of these forums than to repeat the same answer a dozen different times. (Many times I write a blog post if I get the same question multiple times.)

While the two groups overlap a bit, I separate out resources aimed at criminologists (who are typically more interested in research and are current masters/PhD students) from those aimed at crime analysts embedded in a criminal justice agency.

For Criminologists

For resources on where to ask questions, Jacob Kaplan recently created a slack channel, crimhelp.slack.com. It has been joined by a variety of criminologists, folks in think tanks/research institutes, current graduate students, and some working crime analysts. It is new, but you can go and peruse the topics so far, they are pretty wide in scope.

So on that forum you can really ask about anything crim related; the remaining resources are more devoted to programming/statistical analysis.

If you are interested in statistical or programming questions, I used to participate in StackOverflow, Cross Validated (the stats site), and the GIS site. They are good places to check out prior answers, and are worth a shot asking a question on occasion. For tricky python or R coding questions that are small in scope, StackOverflow is excellent. Anything more complicated it is more hit or miss.

Many programming languages have their own question boards. Stata and SPSS are ones I am familiar with and tend to receive good responses (I still actively participate in the SPSS board). If I’m interested in learning some new command/library in Stata, I often just search the forum for posts related to it to check it out in the wild.

For programming questions, it is often useful to create a minimal reproducible example to describe the problem: show what the input data looks like and what you want the output data to look like. (In fact on the forums I link to you will almost always be asked explicitly to do that.)
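For example, a minimal reproducible pandas question might look something like the sketch below (the data and desired output are entirely made up for illustration): a tiny input data frame someone can paste into their own session, plus the output you are after.

#############################################
#Example of a minimal reproducible example for a forum post
import pandas as pd

#input data, small enough to paste into the question
dat = pd.DataFrame({'offense': ['Robbery','Robbery','Burglary','Burglary'],
                    'year': [2019, 2020, 2019, 2020],
                    'count': [5, 7, 3, 4]})

#desired output: one row per offense, years as columns
#          2019  2020
#Burglary     3     4
#Robbery      5     7
want = dat.pivot(index='offense', columns='year', values='count')
print(want)
#############################################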

For Crime Analysts

In similar spirit to the crim slack channel, Police Rewired has a Discord group for crime analysts (not 100% sure who started it, Andreas Varotsis is one of the people involved though). So that was founded by some UK analysts, but there are US analysts participating as well (and the problems folks deal with are very similar, so no real point in making a distinction between US/UK).

For crime analysts in the US, you should likely join either the IACA or a local crime analyst network. Many of the local ones come bundled, so if you join the Texas analyst network TXLEAN you also automatically get an IACA membership. To join is cheap (especially for current students). IACA has also started a user question forum as well.

For folks looking to get an entry level gig, the IACA has a job board that is really good. So it is worth the $10 just for that. They have various other intro resources though as well. For current BA/Masters students who are looking to get a job, I also suggest applying to private sector analyst jobs as well. They are mostly exchangeable with a crime analyst role. (Think more excel jockey than writing detailed statistic programming.)

How I learn to code

What prompted this blog post is that maybe 5 different people in the past month or so have asked me for resources to learn statistical programming. And honestly I do not have a good answer, I’ve never really sat down with a book and learned a statistical software (I’ve tried on a few occasions and failed). I’m always just project focused.

For example, I wanted to do a conjunctive analysis, or deep learning with pytorch, or use conformal prediction intervals to generate synthetic control estimates, etc. So I just sat down and figured out how to do those specific projects using various resources around the internet. One of my next personal projects is to estimate prediction intervals for logistic multilevel models using Julia (based on this very nice set of intros to the language). I also need to get a working familiarity with Tableau. (Both are related to projects I am doing at work.) So expect to see a Tableau dashboard on the blog sometime in the near future!

Also many statistical programming languages are pretty much interchangeable for the vast majority of tasks people do. You can see that I have example blog posts for Excel, Access/SQL, R, SPSS, Stata, python, and ArcGIS. Just pick one and figure it out for a particular project.

For criminologists, I have posted my PhD research course materials, and for crime analysts I have posted my GIS Class and my Crime Analysis course materials (although the GIS course is already out of date, it uses Arc Desktop instead of ArcPro). I don’t suggest you sit down and go through the courses page-by-page though. What I suggest instead is to look at the table of contents, see if anything strikes your fancy, read that particular lecture/code, and if you want to apply it to your own projects, try to work it out. (At least that is how I go about learning coding.)

If you want more traditional learning materials for learning how to code (e.g. textbooks or online courses), I suggest you ask folks on those forums I mentioned. They will likely be able to provide much better advice than I would.

To end, it is totally normal to want to ask questions, get advice, or get feedback. In both my experience in academia and in crime analysis it can be very lonely (I was in a small department, so I was the only analyst). Folks on these forums are happy to help and connect.

Using Association Rules to Conduct Conjunctive Analysis

I’ve suggested to folks a few times in the past that a popular analysis in CJ, called conjunctive analysis (Drawve et al., 2019; Miethe et al., 2008; Hart & Miethe, 2015), could be automated in a fashion using a popular machine learning technique called association rules. So I figured a blog post illustrating it would be good.

I was motivated by some recent work by Nix et al. (2019) examining officer involved injuries in NIBRS data. So I will be doing a relevant analysis (although not as detailed as Justin’s) to illustrate the technique.

This ended up being quite a bit of work. NIBRS is complicated, and I had to do some rewrites of finding frequent itemsets to not run out of memory. I’ve posted the python code on GitHub here. So this blog post will be just a bit of a nicer walkthrough. I also have a book chapter illustrating geospatial association rules in SPSS (Wheeler, 2017).

A Brief Description of Conjunctive Analysis

Conjunctive analysis is more of an exploratory technique for examining high cardinality categorical sets. Or in other words, you search through a database of cases that have many categories to find “interesting” patterns. It is probably easier to see an example than for me to describe it. Here is an example from Miethe et al. (2008):

You can see that here they are looking at characteristics of drug offenders, and then trying to identify particular sets of characteristics that influence the probability of a prison sentence. So this is easy to do in one dimension, it gets very difficult in multiple dimensions though.

Association rules were created for a very different type of problem – identifying common sets of items that shoppers buy together at the same time. But you can borrow that work to aid in conducting conjunctive analysis.

Data Prep for NIBRS

So here I am using 2012 NIBRS data to conduct analysis. Like I mentioned, I was motivated by the Nix and company paper examining officer injuries. They were interested in specifically examining officer involved injuries, and whether the perception that domestic violence cases were more dangerous for officers was justified.

For brevity I only ended up examining five different variable sets in NIBRS (Justin has quite a few more in his paper):

  • assault (or injury) type V4023
  • victim/off relationship V4032
  • ucr type V2006
  • drug use V2009 (also includes computer use!)
  • weapon V2017

All of these variables have three different item sets in the NIBRS codes, and many categories. You will have to dig into the python code, 00_AssocRules.py in the GitHub page to see how I recoded these variables.

Also maybe of interest, I have some functions to do one-hot encoding of wide data. A benefit of NIBRS is that you can have multiple crimes in one incident, e.g. one incident in which both an assault and a burglary occur. I do the analysis in a way that common co-crimes would pop out.
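Those functions are in the GitHub code; here is a minimal toy illustration of the same idea (the incident/offense columns are made up, not the actual NIBRS field names), where co-occurring crimes in the same incident end up as multiple 1’s in a single row.

#############################################
#Toy example of one-hot encoding incidents with multiple offenses
import pandas as pd

long_dat = pd.DataFrame({'incident': [1, 1, 2, 3, 3],
                         'ucr': ['Assault','Burglary','Assault','Drug','Assault']})

#dummy code the offense, then collapse to one row per incident
dummies = pd.get_dummies(long_dat['ucr'], prefix='ucr')
wide = pd.concat([long_dat[['incident']], dummies], axis=1).groupby('incident').max()
print(wide)
#############################################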

Don’t take this as very formal though. Justin’s paper, which used 2016 NIBRS data, only had 1 million observations, whereas here I have over 5 million (so somewhere along the way Justin and I are using different units of analysis). Also Justin incorporates dozens of other variables into the analysis that I don’t here.

Even with just these five variables (and the reduced sets of codes I created), there still end up being 34 different categories in the data.

Analysis of Frequent Item Sets

The first part of conjunctive analysis (or association rules) is to identify common item sets. The work of Hart/Miethe is always pretty vague about how you do this. Association rules take the simple approach of finding any item sets (combinations of categories) that meet an arbitrary frequency threshold.

So the way you represent the data is exactly how the prior Miethe et al. (2008) data showed: you create a series of dummy 0/1 variables. Then in association rules you look for sets of those dummy variables that all take the value of 1 for the same cases.

The code 01_AssocRules.py on GitHub shows this going from the already created dummy variable data. I ended up writing my own function to do this, as I kept getting out of memory errors using the mlxtend library. (I don’t know if this is because my data is large N but with a smaller number of columns.) You can see my freq_sets function to do this.

Typically in association rules you identify item sets that meet a particular support threshold. Support here just means the proportion of all cases in which those items co-occur, e.g. an itemset has 20% support if 20% of cases have both an assault and a weapon of fists listed. Instead I wrote the code to use a minimum N, which I chose here to be 1000 cases. (So out of 5 million cases, this is a support of 1/5000.)
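My actual freq_sets function is in the GitHub code; below is a simplified sketch of the same level-wise idea (not the actual function), counting itemsets directly from a data frame of 0/1 dummy columns and only keeping those with at least min_n cases.

#############################################
#Simplified level-wise frequent itemset search over 0/1 dummy columns
#(a toy sketch, not the freq_sets function from the GitHub code)

def freq_itemsets(dummies, min_n=1000, max_size=5):
    #start with the single columns that meet the minimum N
    cols = [c for c in dummies.columns if dummies[c].sum() >= min_n]
    results = {frozenset([c]): int(dummies[c].sum()) for c in cols}
    current = [frozenset([c]) for c in cols]
    size = 1
    while current and size < max_size:
        #grow each frequent set by one item, count co-occurrence, keep if frequent
        candidates = set()
        for s in current:
            for c in cols:
                if c not in s:
                    candidates.add(s | {c})
        nxt = []
        for cand in candidates:
            n = int(dummies[list(cand)].min(axis=1).sum())
            if n >= min_n:
                results[cand] = n
                nxt.append(cand)
        current = nxt
        size += 1
    return results
#############################################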

I end up finding a total of 411 frequent item sets in the data that have at least 1000 cases (out of the over 5 million). Here are a few examples, with the frequencies to the left. So there are over 2000 cases in the 2012 NIBRS data that had a known relationship between victim/offender, resulted in assault, the weapon used was fists (or kicking), and involved computer use in some way. I only end up finding two itemsets that have 5 categories and that is it, there are no higher sets of categories that have at least 1000 cases in this dataset.

3509    {'rel_Known', 'ucr_Assault', 'weap_Fists', 'ucr_Drug'}
2660    {'rel_Known', 'ucr_Assault', 'weap_Firearm', 'ucr_WeaponViol'}
2321    {'rel_Known', 'ucr_Assault', 'weap_Fists', 'drug_ComputerUse'}
1132    {'rel_Known', 'ucr_Assault', 'weap_Fists', 'weap_Knife'}
1127    {'ucr_Assault', 'weap_Firearm', 'weap_Fists', 'ucr_WeaponViol'}
1332    {'rel_Known', 'ass_Argument', 'rel_Family', 'ucr_Assault', 'weap_Fists'}
1416    {'rel_Known', 'rel_Family', 'ucr_Assault', 'weap_Fists', 'ucr_Vandalism'}

Like I said I was interested in using NIBRS because of the Nix example. One way we can then examine what variables are potentially related to officer involved injuries during a commission of a crime would be to just pull out any itemsets which include the variable of interest, here ass_LEO_Assault.

4039    {'ass_LEO_Assault'}
1232    {'rel_Known', 'ass_LEO_Assault'}
4029    {'ucr_Assault', 'ass_LEO_Assault'}
1856    {'ass_LEO_Assault', 'weap_Fists'}
1231    {'rel_Known', 'ucr_Assault', 'ass_LEO_Assault'}
1856    {'ucr_Assault', 'ass_LEO_Assault', 'weap_Fists'}

So we see there are a total of just over 4000 officer assaults in the dataset. Unsurprisingly almost all of these also had an UCR offense of assault listed (4029 out of 4039).

Analysis of Association Rules

Sometimes just identifying the common item sets is what is of main interest in conjunctive analysis (see Hart & Miethe, 2015 for an example of examining the geographic characteristics of crime events).

But the apriori algorithm is one way to find particular rules of the form ‘if A occurs then B occurs quite often’, where you can swap more complicated itemsets into the antecedent (A) and consequent (B), and use different ways of quantifying ‘quite often’.

I prefer conditional probability notation to the more typical association rule one, but for typical rules we have (here I use A for antecedent and B for consequent):

  • confidence: P(A & B) / P(A). So if the itemset of just A occurs 20% of the time, and the itemset of A and B together occurs 10% of the time, the confidence would be 50%. (Or more simply, the probability of B conditional on A, P(B | A).)
  • lift: confidence(A,B) / P(B). This is the ratio of the confidence of the rule to the baseline rate at which the consequent occurs. So if B occurs 25% of the time at baseline, and the confidence of A & B is 50%, you would have a lift of 2.

There are other rules as well that folks use, but those are the most common two I am interested in.
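As a small sketch of computing those two statistics from the 0/1 dummy data (the function and data frame names here are hypothetical, not the actual NIBRS recodes or the mlxtend implementation):

#############################################
#Computing confidence and lift for a rule A -> B from 0/1 dummies
def conf_lift(dummies, antecedent, consequent):
    #antecedent is a list of dummy column names, consequent a single column
    has_a = dummies[antecedent].min(axis=1) == 1    #cases containing all of A
    p_b = dummies[consequent].mean()                #baseline rate of B
    confidence = dummies.loc[has_a, consequent].mean()  #P(B | A)
    lift = confidence / p_b
    return confidence, lift

#e.g. conf_lift(dum_dat, ['ucr_Vandalism','rel_Family'], 'ass_Argument')
#would return roughly (0.105, 6.5) per the example discussed next
#############################################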

So for example in this data, if I draw out rules that have a lift of over 2, I find rules like {'ucr_Vandalism', 'rel_Family'} -> {'ass_Argument'} produces a lift of over 6. (I can use the mlxtend implementation here in this code, it was only the frequent itemsets code that was giving me problems.) So arguments are listed in the injury codes around 1.6% of the time, but when you have a ucr crime of vandalism and the relationship between victim/offender is family members, the injury type of argument happens around 10.5% of the time (so 10.5/1.6 ~= 6).

The original use case for this is recommender systems/market basket analysis (say if you see someone buy A, give them a coupon for B). So this ends up being not so interesting in this NIBRS example, where criminologists would be interested in more clear cause-effect type relationships. But in the next section I describe some further machine learning models that may be more relevant, and how I might in the future amend the apriori algorithm to examine specific outcomes.

Further Notes

If you are interested in a specific outcome from the get go (so not so much totally exploratory analysis as here), there are a few different options that may make more sense than association rules.

One is the RuleFit algorithm, which basically uses a regularized regression to find simple models and low order interactions. An example of this idea using police stop data is in Goel et al. (2016). These are very similar in the end to simple decision trees; you can also have continuous covariates in the analysis, and it splits them into binary above/below rules. So you could, say, do RTM distance analysis and still have it output a rule like ‘if distance < 1000 ft, predict high risk’. But they are fit in a way that tends to behave better out of sample than simple decision trees.

Another is fitting a more complicated model, say random forests, and then having reduced form summaries to describe those models. I have some examples of using Shapley values for spatial crime prediction in Wheeler & Steenbeek (2020), but for more if-then type sets of rules you could look at Scoped Rules.

I may need to dig into the association rules code some more though, and try to update it to take sample sizes and statistical significance into account for a particular outcome variable. So if you find higher lift in a four-item set predicting a particular outcome, you would search the tree for more sets with a smaller support in the distribution. (I should probably also work on some cool network viz. to look at all the different rules.)

References