Querying OSM places data in python


For a quick post here, Simon Willison has a recent post on using duckdb to query OSM data made available by the Overture foundation. I have written in the past about scraping places data (gas stations, stores, etc., often used in spatial crime analysis) from Google (or other sources), so this is a potentially free source for that same type of data.

So I was interested to check out the data, and it is quite easy to download given the ability to query the parquet data files online. Simon in his post said it was taking a while though, and this example downloading data for Raleigh took around 25 minutes. So no good for a live API, but fine for a batch job.

This python is simple enough to just embed in a blog post:

import duckdb
import pandas as pd
from datetime import datetime
import requests

# setting up in memory duckdb + extensions
db = duckdb.connect()
db.execute("INSTALL spatial")
db.execute("INSTALL httpfs")
db.execute("""LOAD spatial;
LOAD httpfs;
SET s3_region='us-west-2';""")

# Raleigh bound box from ESRI API
ral_esri = "https://maps.wakegov.com/arcgis/rest/services/Jurisdictions/Jurisdictions/MapServer/1/query?where=JURISDICTION+IN+%28%27RALEIGH%27%29&returnExtentOnly=true&outSR=4326&f=json"
bbox = requests.get(ral_esri).json()['extent']

# check out https://overturemaps.org/download/ for new releases
places_url = "s3://overturemaps-us-west-2/release/2023-07-26-alpha.0/theme=places/type=*/*"
query = f"""
SELECT
  *
FROM read_parquet('{places_url}')
WHERE
  bbox.minx > {bbox['xmin']}
  AND bbox.maxx < {bbox['xmax']} 
  AND bbox.miny > {bbox['ymin']} 
  AND bbox.maxy < {bbox['ymax']}
"""

# Took me around 25 minutes
print(datetime.now())
res = pd.read_sql(query,db)
print(datetime.now())

And this currently queries 29k places in Raleigh. Places can have multiple categories, so here I just slice out the main category and check it out:

def extract_main(x):
    return x['main']

res['main_cat'] = res['categories'].apply(extract_main)

res['main_cat'].value_counts().head(30)

And this returns

>>> res['main_cat'].value_counts().head(30)
beauty_salon                         1291
real_estate_agent                     657
landmark_and_historical_building      626
church_cathedral                      567
community_services_non_profits        538
professional_services                 502
real_estate                           452
hospital                              405
automotive_repair                     396
dentist                               350
lawyer                                316
park                                  308
insurance_agency                      298
public_and_government_association     288
spas                                  265
financial_service                     261
gym                                   260
counseling_and_mental_health          256
religious_organization                240
car_dealer                            214
college_university                    185
gas_station                           179
hotel                                 170
contractor                            169
pizza_restaurant                      167
barber                                161
shopping                              160
grocery_store                         160
fast_food_restaurant                  160
school                                158

I can’t say anything about the coverage of this data. Looking near my house it appears pretty well filled in. There are additional pieces of info in the OSM data as well, such as a confidence score.

So definitely a ton of potential to use this as a nice source for reproducible crime analysis (it probably has the major types of places most people are interested in looking at). But I would do some local checks for your data before wholesale recommending the open street map data over an official business directory (if available – but that may not include things like ATMs) or Google Places API data (but this is free!).

Downloading Police Employment Trends from the FBI Data Explorer

The other day on the IACA forums, an analyst asked about comparing her agency’s per-capita rate of sworn/non-sworn employees to other agencies. This is data available via the FBI’s Crime Data Explorer. Specifically, they have released a dataset of police employment, broken down by agency, over time.

The Crime Data Explorer to me is a bit difficult to navigate, so this post is going to show how to use the API to query the data in python (maybe it is easier to get via direct downloads, I am not sure). So first, go to that link above and sign up for a free API key.

Now, in python, the API works via asking for a specific agency’s ORI, as well as date ranges. (You can do a query for national and overall state totals as well, but I would rarely want those levels of aggregation.) So first we are just going to grab all of the agencies across the 50 states (plus DC). This runs fairly fast, it only takes a few minutes:

import pandas as pd
import requests

key = 'Insert your key here'

state_list = ("AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "FL", "GA",
              "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD",
              "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ",
              "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI",
              "SC","SD","TN","TX","UT","VT","VA","WA","WV","WI","WY","DC")

# Looping over states, getting all of the ORIs
fin_data = []
for s in state_list:
    url = f'https://api.usa.gov/crime/fbi/cde/agency/byStateAbbr/{s}?API_KEY={key}'
    data = requests.get(url)
    fin_data.append(pd.DataFrame(data.json()))

agency = pd.concat(fin_data,axis=0).reset_index(drop=True)

And the agency dataframe has just a few shy of 19,000 ORIs listed. Unfortunately this does not have much else associated with the agencies (such as the most recent population). It would be nice if this list had population counts (so you could compare yourself to other similar sized agencies), but alas it does not. So the second part here – scraping the employment data for all 18,000+ agencies – takes a bit (let it run overnight).

# Now grabbing the full employment data
ystart = 1960   # some have data going back to 1960
yend = 2022
emp_data = []

# try/catch, as some of these can fail
for i,o in enumerate(agency['ori']):
    print(f'Getting agency {i+1} out of {agency.shape[0]}')
    url = ('https://api.usa.gov/crime/fbi/cde/pe/agency/'
          f'{o}/byYearRange?from={ystart}&to={yend}&API_KEY={key}')
    try:
        data = requests.get(url)
        emp_data.append(pd.DataFrame(data.json()))
    except Exception:
        print(f'Failed to query {o}')

emp_pd = pd.concat(emp_data).reset_index(drop=True)
emp_pd.to_csv('EmployeePoliceData.csv',index=False)

And that will get you 100% of the employee data on the FBI data explorer, including data for 2022.

To plug my consulting firm here, this is something that takes a bit of work. I pared this code example down to be quite minimal, but for longer running scraping jobs you want to periodically save results and have the code be able to resume from the last save point. So if you scrape 1,000 agencies and your internet goes out, you don’t want to have to start from 0, you want to start from the last point you left off.
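
A minimal sketch of that save-and-resume pattern, reusing the agency, ystart, yend, and key objects from the blocks above (the file names are placeholders I made up, not part of the original code):

import os
import pandas as pd
import requests

DONE_FILE = 'completed_oris.txt'     # hypothetical progress file
OUT_FILE = 'EmployeePoliceData.csv'  # results appended here as we go

# ORIs finished on previous runs, so a restart picks up where it left off
done = set()
if os.path.exists(DONE_FILE):
    with open(DONE_FILE) as f:
        done = set(f.read().split())

for o in agency['ori']:
    if o in done:
        continue
    url = ('https://api.usa.gov/crime/fbi/cde/pe/agency/'
           f'{o}/byYearRange?from={ystart}&to={yend}&API_KEY={key}')
    try:
        res = pd.DataFrame(requests.get(url).json())
    except Exception:
        print(f'Failed to query {o}')
        continue
    # append the results and record progress after every agency
    res.to_csv(OUT_FILE, mode='a', index=False,
               header=not os.path.exists(OUT_FILE))
    with open(DONE_FILE, 'a') as f:
        f.write(o + '\n')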

If that is something you need, it makes sense to send me an email to see if I can help. For that and more, check out my website, crimede-coder.com.

Too relaxed? Naive Bayes does not improve recidivism forecasting in the NIJ challenge

So the paper Improving Recidivism Forecasting With a Relaxed Naïve Bayes Classifier (Lee et al., 2023), recently published in Crime & Delinquency, has incorrect results. Note I am not sandbagging the authors – I reviewed this paper for JQC and the Journal of Criminal Justice, so I have given the authors this same feedback already (multiple times!). The authors however did not correct their results, and just journal shopped and published the wrong findings.

I have replication code here to review. (Note I initially made a mistake in my code replication – I reversed calculating p(x|y) and calculated p(y|x) by accident, see this older code I shared in my prior reviews – but I was still correct in my assertion that Lee’s results were wrong.)

So the main thing that made me go to this effort: the authors report unbelievable results. They report Brier scores for females (Round 1) of 0.104 and for males of 0.159 – these scores blow the competition out of the water. The leaderboard was 0.15 for females and 0.19 for males. Note how I don’t list scores to the third decimal – to distinguish between the winning teams you needed to go down that far. Lee also reports unbelievably low Brier scores for the alternative logit and random forest models – their results on their face are just not believable.
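
For reference, the Brier score is just the mean squared error of the predicted probabilities against the 0/1 outcomes, so it is not a hard thing to calculate – a quick sketch with made-up numbers:

import numpy as np

def brier(prob, outcome):
    # mean squared error of predicted probabilities vs observed 0/1 outcomes
    prob, outcome = np.asarray(prob, float), np.asarray(outcome, float)
    return np.mean((prob - outcome)**2)

# made-up predicted recidivism probabilities and observed outcomes
print(brier([0.2, 0.7, 0.4, 0.9], [0, 1, 1, 1]))  # 0.125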

If the authors really believe their results, it kind of sucks for them that they did not participate in the NIJ challenge – they would have won more than $150,000! But I am pretty sure they are miscalculating their Brier scores somewhere. My replication code shows them in the same ballpark as everyone else, but they would not have made the leaderboard. Here are my estimates of what their Brier scores should be reported as (the Brier column below in the two tables):

Folks can go and look at their paper and their set of spreadsheets in the supplemental material – I have posted not much more than 50 lines of (non-comment) python code that replicates their regression model coefficients and shows their Brier scores are wrong. (And subsequently any points Lee et al. (2023) make about fairness are thus wrong as well.)

NIJ probably released papers at some point, but if you want to see other folks’ discussions, there is Circo & Wheeler (2022) for mine and Gio’s results for team MCHawks, and Mohler & Porter (2021) for team PASDA.

I may put it on the slate sometime to discuss naive Bayes (and other categorical encoding schemes). It is not a bad idea for data with many categories, but for this NIJ data there just isn’t that much to squeeze out. So any future work is unlikely to dramatically improve upon the competition results (it is difficult to overfit this data). Again, given my analysis here, I am pretty sure a valid data analysis (not peeking) at best will “beat” the competition results in the 3rd decimal place (if it can improve at all).

Now part of the authors’ argument is that this method (relaxed naive Bayes) results in simpler interpretations. Typically people interpret “simple” models in terms of the end results, e.g. having a simple checklist of integer weights. The more I deal with predictive models though, the more I think this is maybe misguided. You could also interpret “simple” in terms of the code used to derive the weights (and evaluate the final metrics). This is important when auditing code that others have written, as you will ultimately take the code and apply it to your own data.

I think this “simpler to estimate the same results” is probably more important for scientists and outside groups wanting to verify the integrity of any particular machine learning model than “simple end result weights”. Otherwise scientists can make up results and say my method is better. Which is simpler I suppose, but misses the boat a bit in terms of why we want simple models to begin with.


Some notes on synthetic control and Hogan/Kaplan

This will be a long one, but I have some notes on synthetic control and the back-and-forth between two groups. So first, if you aren’t familiar, Tom Hogan published an article on the progressive District Attorney (DA) in Philadelphia, Larry Krasner, in which Hogan estimates that Krasner’s time in office contributed to a large increase in the number of homicides. The control homicides are estimated using a statistical technique called synthetic control, in which you derive estimates of the trend in homicides to compare Philly to, based on a weighted average of comparison cities.

Kaplan and colleagues (KNS from here on) then published a critique of various methods Hogan used to come up with his estimate. KNS provided estimates using different data and a different method to derive the weights, showing that Philadelphia did not have increased homicides post Krasner being elected. For reference:

Part of the reason I am writing this is if people care enough, you could probably make similar back and forths on every synth paper. There are many researcher degrees of freedom in the process, and in turn you can make reasonable choices that lead to different results.

I think it is worthwhile digging into those in more detail though. For a summary of the method notes I discuss for this particular back and forth:

  • Researchers determine the treatment estimate they want (counts vs rates) – solvers misbehaving is not a reason to change your treatment effect of interest
  • The default synth estimator when matching on counts and pop can have some likely unintended side-effects (NYC pretty much has to be one of the donor cities in this dataset)
  • Covariate balancing is probably a red-herring (so the data issues Hogan critiques in response to KNS are mostly immaterial)

In my original draft I had a note that this post would not be in favor of Hogan nor KNS, but in reviewing the sources more closely, nothing I say here conflicts with KNS (and I will bring a few more critiques of Hogan’s estimates that KNS do not mention). So I can’t argue much with KNS’s headline that Hogan’s estimates are fatally flawed.

An overview of synthetic control estimates

To back up and give an overview of what synth is for general readers, imagine we have a hypothetical city A with homicide counts 10 15 30, where the 30 is after a new DA has been elected. Is the 30 more homicides than you would have expected absent that new DA? To answer this, we need to estimate a counterfactual trend – what the homicide count would have been in a hypothetical world in which a new progressive DA was not elected. You can see the city homicides increased the prior two years, from 10 to 15, so you may say “ok, I expected it to continue to increase at the same linear trend”, in which case you would have expected it to increase to 20. So the counterfactual estimated increase in that scenario is observed - counterfactual, here 30 - 20 = 10, an estimated increase of 10 homicides that can be causally attributed to the progressive DA.

Social scientists tend to not prefer to just extrapolate prior trends from the same location into the future. There could be widespread changes that occur everywhere that caused the increase in city A. If homicide rates accelerated in every city in the country, even those without a new progressive DA, it is likely something else is causing those increases. So say we compare city A to city B, and city B had a homicide count trend during the same time period of 10 15 35. Before the new DA in city A, cities A/B had the same pre-trend (both 10 then 15). In the post time period, city B increased to 35 homicides. So using city B as the counterfactual estimate, we have the progressive DA reducing homicides by 5, again observed - counterfactual = 30 - 35 = -5. So even though city A increased, it increased less than we expected based on the comparison city B.

Note that this is not a hypothetical concern, it is a pretty basic one that you should always keep in mind when examining macro level crime data. There have been national level homicide increases over the time period Krasner has been in office (Yim et al., 2020, and see this blog post for updates). U.S. city homicide rates tend to be very correlated with each other (McDowall & Loftin, 2009).

So even though Philly has increased in homicide counts/rates when Krasner has been in office, the question is are those increases higher or lower than we would expect. That is where the synthetic control method comes in, we don’t have a perfect city B to compare to Philadelphia, so we create our own “synthetic” counter-factual, based on a weighted average of many different comparison cities.

To make the example simple, imagine we have two potential control cities and homicide trends, city C0 with 0 30 20, and city C1 with 20 0 30. Neither looks like a good comparison to city A, which has the trend 10 15 30. But if we do a weighted average of C0 and C1, with weights of 0.5 for each city, when combined they are a perfect match for the two pre-treatment periods:

C0  C1 Average cityA
 0  20   10     10
30   0   15     15
20  30   25     30

This is what the synthetic control estimator does, although instead of giving equal weights it determines the optimal weights to match the pre-treatment time period given many potential donors. In real data, for example, C0 and C1 may be given weights of 0.2 and 0.8 to give the correct balance based on the pre-treatment time periods.
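
As a rough illustration of that weight fitting, here is a sketch using the toy numbers above – just simplex-constrained least squares, not the full Abadie estimator with covariates:

import numpy as np
from scipy.optimize import minimize

# pre-treatment homicide counts from the toy example above
treated = np.array([10, 15])      # city A, two pre-treatment years
donors = np.array([[ 0, 20],      # year 1: C0, C1
                   [30,  0]])     # year 2: C0, C1

# find weights >= 0 that sum to 1 and best match the pre-treatment trend
loss = lambda w: np.sum((donors @ w - treated)**2)
cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
start = np.repeat(1/donors.shape[1], donors.shape[1])
fit = minimize(loss, start, bounds=[(0, 1)]*donors.shape[1], constraints=cons)
print(fit.x)  # ~ [0.5, 0.5], reproducing the equal weights in the table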

The fundamental problem with synth

The rub with estimating the synth weights is that there is no one correct way to do it – you have more numbers to estimate than data points. In the Hogan paper, he has 5 pre-treatment time periods, 2010-2014, and he has 82 potential donors (the 99 other largest cities in the US minus 17 with progressive prosecutors). So you need to learn 82 numbers (the weights) based on 5 data points.


Side note: you can also consider matching on covariates additional data points, although I will go into more detail later on how matching on covariates is potentially a red-herring. Hogan I think uses an additional 5*3=15 time varying points (pop, cleared homicides, homicide clearance rates), and maybe 3 additional time invariant ones (median income, 1 prosecutor categorization, and homicides again!). So he maybe has 5 + 15 + 3 = 23 data points to match on (the same fundamental problem, 23 numbers to learn 82 weights). I am just going to quote the full passage from Hogan (2022a) here where he discusses covariate matching:

The number of homicides per year is the dependent variable. The challenge with this synthetic control model is to use variables that both produce parallel trends in the pre-period and are sufficiently robust to power the post-period results. The model that ultimately delivered the best fit for the data has population, cleared homicide cases, and homicide clearance rates as regular predictors. Median household income is passed in as the first special predictor. The categorization of the prosecutors and the number of homicides are used as additional special predictors. For homicides, the raw values are passed into the model. Abadie (2021) notes that the underlying permutation distribution is designed to work with raw data; using log values, rates, or other scaling techniques may invalidate results.

This is the reason why replication code is necessary – it is very difficult for me to translate this to what Hogan actually did. “Special” predictors here is the R synth package’s term for time-invariant predictors. (I don’t know based on verbal descriptions how Hogan used a time-invariant predictor for the prosecutor categorization for example – does he just treat it as a dummy variable?) Also, on only using median income – was this the only covariate, or did he do a bunch of models and choose the one with the “best” fit? (It seems maybe he did do a search, but he doesn’t describe the search, only the end selected result.)

I don’t know what Hogan did or did not do to fit his models. The solution isn’t to have people like me and KNS guess or have Hogan just do a better job verbally describing what he did, it is to release the code so it is transparent for everyone to see what he did.


So how do we estimate those 82 weights? Well, we typically have restrictions on the potential weights – such as the weights need to be positive numbers, and the weights should sum to 1. These are for a mix of technical and theoretical reasons (that keeping the weights from being too large can reduce the variance of the estimator is a technical reason; that we don’t want negative weights, since we don’t think there are bizarro comparison areas with opposite world trends, is a theoretical one).

These are reasonable but ultimately arbitrary – there are many different ways to accomplish this weight estimation. Hogan (2022a) uses the R synth package, KNS use a newer method also advocated by Abadie & L’Hour (2021) (very similar, but tries to match to the closest single city, instead of weights for multiple cities). Abadie (2021) lists probably over a dozen different procedures researchers have suggested over the past decade to estimate the synth weights.

The reason I bring this up is because when you have a problem with 82 parameters and 5 data points, the problem isn’t “what estimator provides good fit to in-sample data” – you should be able to figure out an estimator that accomplishes good in-sample fit. The issue is whether that estimator is any good out-of-sample.

Rates vs Counts

So besides the estimator used, you can break down 3 different arbitrary researcher data decisions that likely impact the final inferences:

  • outcome variable (homicide counts vs homicide per capita rates)
  • pre-intervention time periods (Hogan uses 2010-2014, KNS go back to 2000)
  • covariates used to match on

Let’s start with the outcome variable question, counts vs rates. So first, as quoted above, Hogan cites Abadie (2021) for saying you should prefer counts to rates: “Abadie (2021) notes that the underlying permutation distribution is designed to work with raw data; using log values, rates, or other scaling techniques may invalidate results.”

This has it backwards though – the researcher chooses whether it makes sense to estimate treatment effects on the count scale vs rates. You don’t goal switch your outcome because you think the computer can’t give you a good estimate for one outcome. So imagine I show you a single city over time:

        Y0    Y1    Y2
Count   10    15    20
Pop   1000  1500  2000

You can see although the counts are increasing, the rate is consistent over the time period. There are times I think counts make more sense than rates (such as cost-benefit analysis), but probably in this scenario the researcher would want to look at rates (as the shifting denominator is a simple explanation causing the increase in the counts).

Hogan (2022b) is correct in saying that the population is not shifting over time in Philly very much, but this isn’t a reason to prefer counts. It suggests the estimator should not make a difference when using counts vs rates, which just points to the problematic findings in KNS (that making different decisions results in different inferences).

Now onto the point that Abadie (2021) says using rates is wrong for the permutation distribution – I don’t understand what Hogan is talking about here. You can read Abadie (2021) for yourself if you want. I don’t see anything about the permutation inferences and rates.

So maybe Hogan mis-cited and meant another Abadie paper – Abadie himself uses rates for various projects (he uses per-capita rates in the 2021 cited paper, Abadie et al., (2010) uses rates for another example), so I don’t think Abadie thinks rates are intrinsically problematic! Let me know if there is some other paper I am unaware of. I honestly can’t steelman any reasonable source where Hogan (2022a) came up with the idea that counts are good and rates are bad though.

Again, even if rates were problematic for the estimator, that is not a reason to prefer counts over rates – you would change your estimator to give you the treatment effect estimate you wanted.


Side note: Where I thought the idea of a problem with rates was going (before digging in and not finding any Abadie work actually saying there are issues with rates) was increased variance with homicide rate data. So Hogan (2022a) estimates synth weights of Detroit (0.468), New Orleans (NO) (0.334), and New York City (NYC) (0.198); here are those cities’ homicide rates graphed (spreadsheet with data + notes on sources).

You can see NO’s rate is very volatile, so it is not a great choice for a matched estimator if using rates. (I have NO as an example in Wheeler & Kovandzic (2018); that much variance though is fairly normal for high crime, not too large cities in the US – see Baltimore for example for even more volatility.) I could foresee someone wanting to make a weighted synth estimator for rates, either making the estimator a population weighted average, or penalizing the variance for small rates. Maybe you can trick microsynth into doing a pop weighted average out of the box (Robbins et al., 2017).


To discuss the Hogan results specifically, I suspect for example that NYC being a control city with high weight in the Hogan paper, which superficially may seem good (both large cities on the east coast), actually isn’t a very good control area considering the differences in homicide trends (either rates or counts) over time. (I am also not so sure about Hogan (2022a) describing NYC and New Orleans as “post-industrial” either. I mean this is true to the extent that all urban areas in the US are basically post-industrial, but they are not rust belt cities like Detroit.)

Here is for reference counts of homicides in Philly, Detroit, New Orleans, and NYC going back further in time:

NYC has such a crazy drop in the 90s, so let’s use the post 2000 data that KNS used to zoom in on the graph.

I think KNS are reasonable here to use 2000 as a cut point – it is more empirically based (post crime drop), in which you could argue the 90’s are a “structural break”, and that homicides settled down in most cities around 2000 (but still typically had a gradual decline). Given the strong national homicide trends across cities though (here is an example I use for class, superimposing Dallas/NYC/Chicago), I think going even back to the 60’s is easily defensible (more so than limiting to post 2010).

It depends on how strict you want to be whether you consider these 3 cities “good” matches for the counts post 2010 in Hogan’s data. Detroit seems a good match on the levels and an OK match on trends. NO is an OK match on trends. NYC and NO balance each other in terms of matching levels, though NYC has steeper declines (even during the 2010-2014 period).

The last graph though shows where the estimated increases from Hogan (2022a) come from. Philly went up and those 3 other cities went down from 2015-2018 (and had small upward bumps in 2019).

Final point in this section: careful what you wish for with sparse weights and sum-to-1 in the synth estimate. What this means in practice, when using counts and matching on pop size, is that you need lines that are above and below Philly on those dimensions. So to get a good match on pop, it needs to select at least one of NYC/LA/Houston (Chicago was eliminated due to having a progressive prosecutor). To get a good match on homicide counts, it also has to pick at least one city with more homicides per year, which limits the options to New York and Detroit (LA/Houston have lower overall homicide counts than Philly).

You can’t do the default Abadie approach for NYC for example (matching on counts and pop) – it will always have a bad fit when using comparison cities in the US as the donor pool. You either need to allow the weights to sum to larger than 1, or the lasso approach with an intercept is another option (so you only match on trend, not levels).

Because matching on trends is what matters for proper identification in this design, not levels, this is all sorts of problematic with the data at hand. (This is also a potential problem with the KNS estimator as well. KNS note though they don’t trust their estimate offhand, their reasonable point is that small changes in the design result in totally different inferences.)

Covariates and Out of Sample Estimates

For the sake of argument, say I said Hogan (2022a) is bunk because it did not match on “per-capita annual number of cheese-steaks consumed”. Even though on its face this covariate is non-sense, how do you know it is non-sense? In the synthetic control approach, there is no empirical, falsifiable way to know whether a covariate is a correct one to match on. There is no way to know that median income is better than cheese-steaks.

If you wish for more relevant examples, Philly obviously has more issues with street consumption of opioids than Detroit/NOLA/NYC, which others have shown is related to homicide and which has been getting worse over the time Krasner has been in office (Rosenfeld et al., 2023). (Or more simply, social disorganization is the more common way that criminologists think about demographic trends and crime.)

This uncertainty in “what demographics to control for” is ok though, because matching on covariates is neither necessary nor sufficient to ensure you have estimated a good counter-factual trend. Abadie in his writings intended for covariates to be more like fuzzy guide-rails – they are qualitative things that you think the comparison areas should be similar on.

Because there is effectively an infinite pool of potential covariates to match on, I prefer the approach of simply limiting the donor pool a priori – Hogan limiting to large cities is on its face reasonable. Including other covariates is not necessary, and does not make the synth estimate more or less robust. Whether KNS used good or bad data for covariates is entirely a red-herring as to the quality of the final synth estimate.


Side note: I don’t doubt that Hogan got advice to not share data and code. It is certainly not normative in criminology to do this. It creates a bizarre situation though, in which someone can try to replicate Hogan by collating original sources, and then Hogan always comes back and says “no, the data you have are wrong” or “the approach you did is not exactly replicating my work”.

I get that collating data takes a long time, and people want to protect their ability to publish in the future. (Or maybe just limit their exposure to having their work criticized.) It is blatantly antithetical to verifying the scientific integrity of people’s work though.

Even if Hogan is correct that the covariates KNS used are wrong, it is mostly immaterial to the quality of the synth estimates. It is a waste of time for outside researchers to even bother replicating the covariates Hogan used.


So I used the idea of empirical/falsifiable – can anything associated with synth be falsified? Why yes it can – the typical approach is to do some type of leave-one-out estimate. It may seem odd because synth estimates an underlying match to a temporal trend in the treated location, but there is nothing temporal about the synth estimate. You could jumble up the years in the pre-treatment sample and would still estimate the same weights.

Because of this, you can leave-a-year-out in the pre-treatment time period, run your synth algorithm, and then predict that left out year. A good synth estimator will be close to the observed value for the out of sample estimates in the pre-treated time period (and as a side bonus, you can use that variance estimate to estimate the error in the post-trend years).
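
A minimal sketch of that leave-a-year-out check, building on the toy weight-fitting code earlier in the post (again just simplex-constrained least squares, not any particular package’s estimator):

import numpy as np
from scipy.optimize import minimize

def fit_weights(donors, treated):
    # weights >= 0 that sum to 1 and best match the treated pre-period
    n = donors.shape[1]
    loss = lambda w: np.sum((donors @ w - treated)**2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
    return minimize(loss, np.repeat(1/n, n), bounds=[(0, 1)]*n,
                    constraints=cons).x

def loo_errors(donors, treated):
    # drop each pre-treatment year, refit the weights, predict the held-out year
    errs = []
    for i in range(len(treated)):
        keep = np.arange(len(treated)) != i
        w = fit_weights(donors[keep], treated[keep])
        errs.append(treated[i] - donors[i] @ w)
    # a smaller spread in these out-of-sample errors = a better behaved estimator
    return np.array(errs)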

That is a relatively simple way to determine if the Hogan 5 year vs KNS 15 year time periods are “better” synth controls (my money is on KNS for that one). Because Hogan has not released data/code, I am not going to go through that trouble. As I said in the side note earlier, I could try to do that, and Hogan could simply come back and say “you didn’t do it right”.

This also would settle the issue of “over-fit”. You actually cannot just look at the synth weights and say that if they are sparse they are not over-fit and if they are not sparse they are over-fit. So for reference, you have Hogan essentially fitting 82 weights based on 5 data points, and he identified a fit with 3 non-zero weights. Flip this around, and say I had 5 data points and fit a model with 3 parameters – it is easily possible that the 3 parameter model in that scenario is overfit.

Simultaneously, it is not necessary to have a sparse weights matrix. Several alternative methods to estimate synth will not have sparse weights (I am pretty sure Xu (2017) will not have sparse weights, and microsynth estimates are not sparse either for just two examples). Because US cities have such clear national level trends, a good estimator in this scenario may have many tiny weights (where good here is low bias and variance out of sample). Abadie thinks sparse weights are good to make the model more interpretable (and prevent poor extrapolation), but that doesn’t mean by default a not sparse solution is bad.

To be clear, KNS admit that their alternative results are maybe not trustworthy due to not sparse weights, but this doesn’t imply Hogan’s original estimates are themselves “OK”. I think maybe a correct approach with city level homicide rate data will have non-sparse weights, due to the national level homicide trend that is common across many cities.

Wrapping Up

If Crim and Public Policy still did response pieces maybe I would go through the trouble of doing the cross validation and making a different estimator (although I would be unlikely to be an invited commenter). But I wanted to at least do this write up, as like I said at the start, I think you could do this type of critique of the majority of synth papers in criminology being published at the moment.

To just give my generic (hopefully practical) advice to future crim work:

  • don’t worry about matching on covariates, worry about having a long pre-period
  • with the default methods you need to worry about whether you have enough “comparable” units – this is in terms of levels, not just trends
  • the only way to know the quality of the modeling procedure in synth is to do out of sample estimates.

Bullet points 2/3 are perhaps not practical – most criminologists won’t have the capability to modify the optimization procedure to the situation at hand (I spent a few days trying, without much luck, to do the penalized variants I suggested – sharing so others can try themselves, I need to move onto other projects!). It also takes a bit of custom coding to do the out of sample estimates.

For many realistic situations though, I think criminologists need to go beyond just point and clicking in software, especially for this overdetermined system of equations synthetic control scenario. I did a prior blog post on how I think many state level synth designs are effectively underpowered (and suggested using lasso estimates with conformal intervals). I think that is a better default in this scenario as well compared to the typical synth estimators, although you have plenty of choices.

Again, I had initially written this trying to two-side the argument, and not be for or against either set of researchers. But sitting down and really reading all the sources and arguments, KNS are correct in their critique. Hogan is essentially hiding behind not releasing data and code, and in that scenario can make an endless set of (ultimately trivial) responses to anyone who publishes a replication/critique.

Even if some of the numbers KNS collated are wrong, it does not make Hogan’s estimates right.


YouTube interview with Manny San Pedro on Crime Analysis and Data Science

I recently did an interview with Manny San Pedro on his YouTube channel, All About Analysis. We discuss various data science projects I conducted while either working as an analyst, or in a researcher/collaborator capacity with different police departments:

Here is an annotated breakdown of the discussion, as well as links to various resources I mention in the interview. This is not a replacement for listening to the video, but it is an easier set of notes for linking to more material on the particular items I discuss.

0:00 – 1:40, Intro

For a rundown of my career: I went to do my PhD in Albany (2008-2015). During that time period I worked as a crime analyst at Troy, NY, as well as a research analyst for my advisor (Rob Worden) at the Finn Institute. My research focused on quant projects with police departments (predictive modeling and operations research). In 2019 I went to the private sector, and now work as an end-to-end data scientist in the healthcare sector working with insurance claims.

You can check out my academic and my data science CV on my about page.

I discuss the workshop I did at the IACA conference in 2017 on temporal analysis in Excel.

Long story short, don’t use percent change, use other metrics and line graphs.

7:30 – 13:10, Patrol Beat Optimization

I have the paper and code available to replicate my work with Carrollton PD on patrol beat optimization with workload equality constraints.

For analysts looking to teach themselves linear programming, I suggest Hillier’s book. I also give examples on linear programming on this blog.

It is different than statistical analysis, but I believe has as much applicability to crime analysis as your more typical statistical analysis.

13:10 – 14:15, Million Dollar Hotspots

There are hotspots of crime so concentrated that the expected labor cost reduction from having officers assigned full time likely offsets the position. E.g. if you spend a million dollars in labor addressing crime at that location, and having a full time officer reduces crime by 20%, the return on investment for the hotspot breaks even with paying the officer’s salary.

I call these Million dollar hotspots.

14:15 – 28:25, Prioritizing individuals in a group violence intervention

Here I discuss my work on social network algorithms to prioritize individuals to spread the message in a focused deterrence intervention. This is the opposite of how many people view “spreading” in a network – I identify something good I want to spread, and seed the network in a way that optimizes that spread:

I also have a primer on SNA, which discusses how crime analysts typically define nodes and edges using administrative data.

Listen to the interview as I discuss more general advice – in SNA, what you want to accomplish in the end matters for how you define the network. So I discuss how you may want to define edges via victimization to prevent retaliatory violence (I think that would make sense for violence interrupters to be proactive, for example).

I also give an example of how detective case allocation may make sense to base on SNA – detectives have background with an individual’s network (e.g. have a rapport with a family based on prior cases worked).

28:25 – 33:15, Be proactive as an analyst and learn to code

Here Manny asked the question of how analysts prevent their role being turned into a more administrative role (just getting requests and running simple reports). I think the solution to this (not just in crime analysis, but also being an analyst in the private sector) is to be proactive. You shouldn’t wait for someone to ask you for specific information, you need to be defining your own role and conducting analysis on your own.

He also asked about crime analysis being under-used in policing. I think being stronger at computer coding opens up so many opportunities that learning python, R, SQL, is the area I would like to see stronger skills across the industry. And this is a good career investment as it translates to private sector roles.

33:15 – 37:00, How ChatGPT can be used by crime analysts

I discuss how ChatGPT may be used by crime analysts to summarize qualitative incident data and help inform analysis. (Check out this write-up by Andreas Varotsis for an example.)

To be clear, I think this is possible, but the tech I don’t think is quite up to that standard yet. Also do not submit LEO sensitive data to OpenAI!

Also always feel free to reach out if you want to nerd out on similar crime analysis questions!

Make more money

So I enjoy Ramit Sethi’s Netflix series on money management – fundamentally it is about money coming in and money going out and the ability to balance a budget. On occasion I see other budget coaches focus on trivial expenses (the money going out) whereas for me (and I suspect the majority of folks reading this blog with higher degrees and technical backgrounds) you should almost always be focused on finding a higher paying job.

Let’s go with a common example people use as unnecessary discretionary spending – getting a $10 drink at Starbucks every day. If you do this over the course of a 365 day year, you will have spent an additional $3,650. If you read my blog about coding and statistics and that expense bothers you, you are probably not making as much money as you should be.

Ramit regularly talks about asking for raises – I am guessing for most people reading this blog, a raise would be well over that Starbucks expense. But part of the motivation to write this post is in reference to formerly being a professor. I think many criminal justice (CJ) professors are underemployed, and should consider better paying jobs. I am regularly seeing public sector jobs in CJ that have substantially better pay than being a professor. This morning someone shared a position for an entry level crime analyst at the Reno Police Department with a pay range of $84,000 to $102,000:

The low end of that starting pay range is competitive with the majority of starting assistant professor salaries in CJ. You can go check out what the CJ professors at Reno make (which is pretty par for the course for CJ departments in the US) in comparison. If I had stayed as a CJ professor, even with moving from Dallas to other universities and trying to negotiate raises, I would be lucky to be making over $100k at this point in time. Again, that Reno position is an entry level crime analyst – asking for a BA + 2 years of experience or a Masters degree.

Private sector data science jobs in comparison, in DFW area in 2019 entry level were often starting at $105k salary (based on personal experience). You can check out BLS data to examine average salaries in data science if you want to look at your particular metro area (it is good to see the total number in that category in an area as well).

While academic CJ salaries can sometimes be very high (over $200k), these are quite rare. There are a few things going against professor jobs, and CJ ones in particular, that depress CJ professor wages overall. Social scientists in general make less than STEM fields, and CJ departments are almost entirely in state schools that tend to have wage compression. Getting an offer at Harvard or Duke is probably not in the cards if you have a CJ degree.

In addition to this, with the increase in the number of PhDs being granted, competition is stiff. There are many qualified PhDs, making it very difficult to negotiate your salary as an early career professor – the university could hire 5 people who are just as qualified in your stead who aren’t asking for that raise.

So even if you are lucky enough to have negotiating power to ask for a raise as a CJ professor (which most people don’t have), you often could make more money by getting a public sector CJ job anyway. If you have quant skills, you can definitely make more money in the private sector.

At this point, most people go back to the idea that being a professor is the ultimate job in terms of freedom. Yes, you can pursue whatever research line you want, but you still need to teach courses, supervise students, and occasionally do service to the university. These responsibilities all by themselves are a job (the entry level crime analyst at Reno will work less overall than the assistant professor who needs to hustle to make tenure).

To me the trade off in freedom is worth it because you get to work directly with individuals who actually care what you do – you lose freedom because you need to make things within the constraints of the real world that real people will use. To me being able to work directly on real problems and implement my work in real life is a positive, not a negative.

Final point to make in this blog, because of the stiff competition for professor positions, I often see people suggesting there are too many PhDs. I don’t think this is the case though, you can apply the skills you learned in getting your CJ PhD to those public and private sector jobs. I think CJ PhD programs just need small tweaks to better prepare students for those roles, in addition to just letting people know different types of positions are available.

It is pretty much at the point that alt-academic jobs are better careers than the majority of CJ academic professor positions. If you had the choice to be an assistant professor in CJ at University of Nevada Reno, or be a crime analyst at Reno PD, the crime analyst is the better choice.

Javascript apps and ASEBP update

So for a quick update, my most recent post on ASEBP is This One Simple Trick Will Improve Attitudes Toward Police. (Note you need an ASEBP membership to read it.) There are several recent studies by different groups showing that following up with victims, even if you won’t solve the crime in the end, improves overall attitudes towards police. A simple thing for PDs to do. See the reference list at the end of the post for the various studies.

Besides that, no blog posts here recently as I have been working on my CRIME De-Coder site, in particular developing a few additional javascript demos. My most recent one is a social network app applying my dominant set algorithm (to prioritize call-ins in a group violence/focused deterrence intervention) (Wheeler et al., 2019).

The javascript apps are very nice, as they are all client side – my website just serves the text files, and your local browser does all the hard work. I don’t need to worry about dealing with LEO sensitive data in that scenario either.

I am still learning a ton of website development (will have some surveys deployed using PHP + Google Sheets here soonish on CRIME De-Coder). I debate whether it is worth writing up blog posts on that here. The javascript network application is almost a 1:1 translation of my python code. I don’t know much about vectorized stuff in javascript, but the network algorithm stuff is mostly just dictionaries, sets, and loops. If interested, you can just right click in the browser when the page is open and inspect the source.

References

  • Clark, B., Ariel, B., & Harinam, V. (2022). How Should the Police Let Victims Down? The Impact of Reassurance Call-Backs by Local Police Officers to Victims of Vehicle and Cycle Crimes: A Block Randomized Controlled Trial. Police Quarterly, Online First.
  • Curtis-Ham, S., & Cantal, C. (2022). Locks, lights, and lines of sight: an RCT evaluating the impact of a CPTED intervention on repeat burglary victimisation. Journal of Experimental Criminology, Online First.
  • Henning, K., et al. (2023). The Impact of Online Crime Reporting on Community Trust. Police Chief Online, April 12, 2023.
  • Wheeler, A. P., McLean, S. J., Becker, K. J., & Worden, R. E. (2019). Choosing representatives to deliver the message in a group violence intervention. Justice Evaluation Journal, 2(2), 93-117.

Dashboards are often not worth the effort

When end users see dashboards, they often think “this is really whiz-bang cool, let’s do that for our data”. There are two issues though that I commonly see with dashboards. One is that the task to be accomplished with the dashboard is not well defined, and so even a visually well done dashboard mostly goes unused. The second is that there are a ton of headaches deploying dashboards with real data – and the effort to do it right is often not worth it.

For the first part, consider a small agency that has a simple crime trends dashboard. The intent is to identify anomalous upticks of crime. This requires someone to log into the dashboard, click around the different crime trends, and visually see if they are higher than expected. It would be easier to either have an automated alert when some threshold is met, or a standardized report (e.g. once a week) that is emailed out for review.
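
As a rough sketch of what that automated alert could look like, using the Poisson Z-score I have written about here before (the column names and threshold are placeholders, not a reference to any particular system):

import math

def poisson_z(current, past):
    # 2*(sqrt(current) - sqrt(past)), a variance stabilized difference for counts
    return 2 * (math.sqrt(current) - math.sqrt(past))

def flag_upticks(weekly, threshold=3):
    # weekly: dataframe with columns crime_type, this_week, prior_week_avg
    weekly = weekly.copy()
    weekly['z'] = [poisson_z(c, p) for c, p
                   in zip(weekly['this_week'], weekly['prior_week_avg'])]
    # email or print these rows on a schedule instead of hosting a dashboard
    return weekly[weekly['z'] > threshold]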

This post is not even going to be about ‘most dashboards show stupid data’ or ‘the charts show the data in totally inappropriate ways’. Even in cases in which you can build a nice dashboard, the complexity level is IMO not worth it in many situations I have encountered. Automating alerts and building regular standard reports is, for the vast majority of situations, in my opinion a better solution for data products. The whiz-bang ability to interactively click stuff will only be used by a very small number of people (who can often build stuff like that for themselves anyway).

So onto the second part, deploying dashboards with real data that others can interact with. Many of the cool example dashboards you see online do tricks that would be inappropriate for a production dashboard. So even if someone is like ‘yeah I can do a demo dashboard in a day on my laptop’, there is still a ton more work to expose that dashboard to different individuals, which is probably the ultimate goal.

Dashboards have two parts: A) connecting to a source database, then B) exposing the user-interface to outside parties. Seems simple right? Wrong.

Let’s start with part A, connecting to a source database. Many dashboard examples you see online cheat at this stage – they embed a static version of the data in the dashboard itself. If Gary needs to re-upload the data to the dashboard every week, what happens when Gary goes on vacation or takes a day off? You now have an out of date dashboard.

It is quite hard to manage connections between live source data and a dashboard. In the case you have a public facing dashboard, I would just host a public file somewhere on the internet (so anyone can download the source data), point the dashboard to that source, and then have some automated process update that source data. This solves the Gary-went-on-vacation factor at least, and there is no security risk. You intentionally only upload data that can be disseminated to the public.

One potential alternative for a public facing dashboard is to make it serverless. This will often embed the dashboard (and maybe the data) into the webpage itself (e.g. pyscript wasm), so it may be slow to start up, but it will work reasonably well if you can handle a one minute lag. You don’t need to worry about malicious actors in that scenario, as the heavy computation is done on the client’s computer, not your server. I have an example on CRIME De-Coder (note it is currently not up to date – my code is running fine, but Dallas, post the cyber-attack, has not been updating their public data).

Managing a direct connection to your source data in a public facing dashboard is not a good idea, as malicious actors can spam your dashboard. This denial of service attack will not only make your dashboard unresponsive, but will also eat up your database processing. (Big companies often have a reporting database server vs a production database server in the best case scenario, but this requires resources most public sector agencies do not have.)

The solution to this is to limit who can access the dashboard, part B above. Unfortunately, with the majority of dashboard software, when you want to make a live connection and/or limit who can see the information, you are in the ‘need to pay for this’ scenario. A very unfortunate aspect of the ‘you need to pay for this’ is that most of these vendors charge per viewer – it isn’t a flat fee. PowerBI is maybe OK if your organization already pays for Sharepoint; Tableau licensing is so painful I wouldn’t suggest it.

So what is the alternative? You are now in charge of spinning up your own server so people can click a dropdown menu and generate a line graph. Again you need to worry about security, but at least if you can hide behind a local network or VPN it is probably doable for most police departments.

I don’t want to say dashboards are never worth it. I think public facing dashboards are a good thing for police transparency, and if done right are easier to implement than private ones. Private ones are doable as well (especially if limiting to intranet applications). But like I said, this level of effort is over the top compared to just doing a regular report.

Updates on CRIME De-Coder and ASEBP

So I have various updates on my CRIME De-Coder consulting site, as well as new posts on the American Society of Evidence Based Policing Criminal Justician series.

CRIME De-Coder

Blog post, Don’t use percent change for crime data, use this stat instead. I have written a bunch here about using Poisson Z-scores, so if you are reading this it is probably old news. Do us all a favor and in your Compstat reports drop ridiculous percent change metrics with low baselines, and use 2 * ( sqrt(Current) - sqrt(Past) ).

Blog post, Dashboards should be up to date. I will have a more techy blog post here on my love/hate relationship with dashboards (most of the time static reports are a better solution). But one scenario where they do make sense is public facing dashboards – and those should be up to date. The “free” versions of popular tools (Tableau, PowerBI) don’t allow you to link to a source dataset and get auto-updates, so you see many old, out of date dashboards online. If you contract with me, I can automate it so it is up to date and doesn’t rely on an analyst manually updating the information.

Demo page – that page currently includes demonstrations for:

The WDD Tool is pure javascript – picking up more of that slowly (the Folium map has a few javascript listener hacks to get it to look the way I want). As a reference for web development, I like Jon Duckett’s three books (HTML, javascript, PHP).

Ultimately too much stuff to learn, but on the agenda are figuring out google cloud compute + cloud databases a bit more thoroughly. Then maybe add some PHP to my CRIME De-Coder site (a nicer contact me form, auto-update sitemap, and rss feed). I also want to learn how to make ArcGIS dashboards as well.

Criminal Justician

The newest post is Situational crime prevention and offender planning – discussing one of my favorite examples of crime prevention through environmental design (on suicide prevention) and how it is a signal about offender behavior that goes beyond simplistic impulsive behavior. I then relate this back to current discussion of preventing mass shootings.

If you have ideas about potential posts for the society (or this blog, or the CRIME De-Coder blog), always feel free to make a pitch.

A statistical perspective on year-to-date metrics

Jerry Ratcliffe, and now more recently Jeff Asher, have written about how volatile early-year projections of year-to-date (YTD) percent change are. I am going to write about why this is not the right way to frame the problem in my opinion – I will present a better behaved estimate that is less volatile, but that clearly doesn’t give police departments what they want.

Going to the end advice first – people find me irksome for the suggestion, but you shouldn’t be using percent changes at all. A simple alternative I have stated for low count crime data is a Poisson Z-score, which is simply 2*(sqrt(Current) - sqrt(Past)) – a value greater than 3 or 4 (in absolute value) is a signal the two processes are significantly different (under the null hypothesis that the counts have a Poisson distribution).
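
In code that is just (a quick sketch with made-up counts):

from math import sqrt

def poisson_z(current, past):
    # Poisson Z-score, 2*(sqrt(current) - sqrt(past))
    return 2 * (sqrt(current) - sqrt(past))

# made-up example, 50 burglaries this year vs 35 last year
print(round(poisson_z(50, 35), 2))  # 2.31, not notable by the 3-4 rule of thumb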

A Better YTD estimate

So here I am going to present a more accurate YTD percent change metric – but don’t take that as advice you should be using YTD percent change. It is more of an exercise to say why you shouldn’t be using this metric to begin with. Year end percent change is defined as:

(Current - Past)/Past = % Change

Note that you can rewrite this as:

Current/Past - Past/Past  = % Change
Current/Past - 1          = % Change

So really it is only the ratio Current/Past that we care about estimating; the translation to a percent doesn’t matter. In the above equations, I am writing these as cumulative totals for the whole year. So let’s do breakdowns via subscripts, and shorten Current and Past to C and P respectively. So say we have data through January; people typically estimate the YTD percent change then as:

(C_January - P_January)/P_January = % Change January

To make it easier, I am going to write an e subscript for early, and an l subscript for later. So if we then estimate YTD for February, we have C_January + C_February = C_e. Also note that C_e + C_l = Current, the early observed values plus the later unobserved values equal the year total. This identifies a clear error when people use only subsets of the data to do YTD year end projections (what both Jerry and Jeff did in their posts to argue against early YTD estimates). You should not just use P_e in your estimate, you should use the full prior year counts.

Let’s go back to our year end estimate, writing it in early/later form:

[C_e + C_l - (P_e + P_l)]/(P_e + P_l) = % Change

This only has one unknown in the equation – C_l, the unknown rest-of-year projection. You should not use (C_e - P_e)/P_e, as this introduces several stochastic elements where none are needed – P_e is not necessarily a good estimate of P_e + P_l. So let’s do a simple example, imagine we had homicide totals:

     Past Current
Jan    2     1
Feb    0      
Mar    1      
Apr    1      
May    1      
Jun    1      
Jul    1      
Aug    1      
Sep    1      
Oct    1      
Nov    1      
Dec    1      
---
Tot   12

The naive way of doing YTD estimates, we would say our January YTD estimate is (1 - 2)/2 = -50%. Whereas I am saying you should use (1 + C_l - 12)/12 – filling in whatever value you project for the rest of the year totals, C_l. Simple projections you can do in a spreadsheet are ‘no change’, just fill in the rest of the prior year, which here would be C_l = 10 and gives a YTD percent change estimate of (11 - 12)/12 ≈ -8%. Another simple one is extrapolate, projecting the year end total as C_e*(1/year_proportion) = 1*12 = 12 (so C_l = 11), which gives (12 - 12)/12 = 0%. (You would really want to fit a model with seasonal and trend components and project out the remaining part of the year, which will often land somewhere between these two simpler methods.)
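
A spreadsheet works fine for these, but here is the same toy example in python (a minimal sketch using the table above):

# monthly homicide counts from the toy table above
past = [2, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]   # full prior year, sums to 12
current = [1]                                  # observed through January

m = len(current)                 # months observed so far
C_e, P = sum(current), sum(past)

naive = (C_e - sum(past[:m])) / sum(past[:m])   # (1 - 2)/2    = -50%
running = (C_e + sum(past[m:]) - P) / P         # (11 - 12)/12 ~  -8%
extrapolate = (C_e*12/m - P) / P                # (12 - 12)/12 =   0%
print(naive, running, extrapolate)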

So far this is just a theoretical “should be a better estimator” – let’s show it with some actual data. Python code to replicate is here, but I took open data from Cary, NC, which goes back to 2000, so we have a sample of 22 years. Estimates of the error, broken down by month and version, are below. The naive estimate is how it is typically done (equivalent to Jeff/Jerry’s blog posts), the running estimate uses the prior year to fill in C_l, and extrapolate uses the current months to fill it in. The error metric is | (estimated % change) - (actual year end % change) |, and the stats show the mean (standard deviation) over the sample (n=22). Here are the metrics for larceny, which averages 123 incidents per month over the sample:

       Naive   Running  Extrapolate
Jan   12 (7)    6 (4)     10 (7)
Feb    8 (6)    6 (4)     11 (7)
Mar    9 (6)    5 (3)      8 (6)
Apr    9 (7)    5 (3)      8 (5)
May    7 (6)    5 (3)      6 (4)
Jun    6 (4)    4 (3)      4 (3)
Jul    5 (3)    4 (3)      4 (3)
Aug    4 (3)    3 (2)      3 (2)
Sep    3 (2)    3 (2)      2 (2)
Oct    2 (1)    2 (1)      2 (1)
Nov    1 (1)    1 (1)      1 (1)
Dec    0 (0)    0 (0)      0 (0)

And here are the metrics for burglary, which average 28 per month over the sample. Although these have higher error metrics (due to lower/more volatile baseline counts), my estimator is still better than the naive one for the majority of the year.

       Naive   Running  Extrapolate
Jan   34 (25)   12 (8)    24 (23)
Feb   15 (14)   11 (7)    16 (13)
Mar   15 (14)   12 (7)    15 (11)
Apr   15 (11)   10 (7)    13 ( 8)
May   14 (10)   10 (7)    10 ( 7)
Jun   11 ( 8)   10 (7)     8 ( 6)
Jul    9 ( 7)    9 (7)     7 ( 5)
Aug    7 ( 5)    8 (5)     6 ( 3)
Sep    6 ( 4)    6 (5)     4 ( 3)
Oct    6 ( 4)    5 (4)     3 ( 3)
Nov    3 ( 3)    3 (3)     2 ( 2)
Dec    0 ( 0)    0 (0)     0 ( 0)

Running tends to do better for earlier in the year (and for smaller N samples). Both the running and extrapolate estimates are closer to the true year end percent change compared to the naive estimate in around 70% of the observations in this sample. (And tends to be even more pronounced in the smaller crime count categories, closer to 80% to 90% of the time better.)

In Jerry’s and Jeff’s posts, they use a metric of +/- 5 to say “it is close” – this corresponds in my tables to absolute errors in the range of 5 percentage points. You meet that criterion on average in this sample for my estimators in March for larcenies (running) and September (extrapolate) for burglaries.

To be clear though, even with the more accurate projections, you should not use this estimate.

What do police departments want?

So Jeff may literally want an end-of-year projection for when he writes a Times article – similar to how a government might give a year end projection for GDP growth. But this is not what most police departments want when they calculate YTD metrics. So saying in turn “you shouldn’t use YTD because the error is high” to me misses the boat a bit. I can give a metric that has lower error rates, but you still shouldn’t use YTD percent change.

What police departments want to examine is the more general question “are my numbers high?” – you can further parse this into “are my numbers high consistently over the past date range” (of which the past year is just a convenient demarcation) or “are my numbers anomalous high right now”. The former is asking about long term trends, and the latter is asking about short term increases. Part of why I don’t like YTD is that it masks these two metrics – a spike early in the year can look like a perpetual long term upward trend later in the year.

I have training material showing off two different types of charts I like to use in lieu of YTD metrics. These can identify anomalous short term and long term trends. Here is an example weekly chart showing trends (in black line) and short term spikes (outside the error intervals):
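
For a rough idea of how those error intervals can be constructed, here is a minimal sketch that inverts the Poisson Z-score around a smoothed trend (the window size and cutoff are just placeholder choices, not the exact settings in my training material):

import numpy as np
import pandas as pd

def weekly_error_bands(counts, window=8, z=3):
    # counts: pandas Series of weekly counts; returns trend plus Poisson style bands
    trend = counts.rolling(window, min_periods=window).mean()
    # invert z = 2*(sqrt(count) - sqrt(trend)) to get count limits at +/- z
    low = np.maximum(np.sqrt(trend) - z/2, 0)**2
    high = (np.sqrt(trend) + z/2)**2
    out = pd.DataFrame({'count': counts, 'trend': trend, 'low': low, 'high': high})
    out['spike'] = out['count'] > out['high']   # short term anomalies to flag
    return out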

So this is an uber nerd post – I hope it has general lessons though. One is that if you need to estimate Y, and you can write Y as a function of other quantities, some unknown and some already known, e.g. Y = f(x1, c), then maybe you should just focus on estimating x1 in that scenario, not model Y directly.

In terms of more general statistical modeling of crime trends, I have debated in the past examining more thoroughly seasonal-trend decomposition techniques, but I think the examples I give above are quite sufficient for most analysis (and can be implemented in a spreadsheet).