How long to conduct your experiment: Talk at ASEBP

At the upcoming American Society of Evidence Based Policing conference, I am giving a talk Thursday morning (9:45-10:00): How long to conduct your experiment.

The talk goes over some of the simple metrics I have created to help plan how long to conduct your intervention, such as how long to evaluate your hot spots intervention, or a purchase intended to increase arrest rates, etc.

I have prepared a ton of different resources. The main one is a web-based application (a WASM-based app with R as the backend) where you can enter your inputs and generate a graph showing how precise your parameter estimates are.
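For flavor, one of the metrics the app covers, the weighted displacement difference (WDD) test, is simple enough to sketch in a few lines of python. This is a minimal sketch, and the counts here are made up for illustration; under the Poisson assumption the variance of a count equals the count itself:

```python
from math import sqrt

def wdd(pre_t, post_t, pre_c, post_c):
    """Weighted displacement difference for pre/post crime counts
    in treated and control areas, assuming Poisson counts."""
    est = (post_t - pre_t) - (post_c - pre_c)
    se = sqrt(pre_t + post_t + pre_c + post_c)  # Poisson variance = count
    return est, se

# Hypothetical hot spot: 100 -> 60 crimes treated, 90 -> 85 control
est, se = wdd(100, 60, 90, 85)
print(est, se, est / se)  # estimate, standard error, z-score
```

Longer evaluation periods accumulate larger counts, which shrinks the standard error relative to the estimate; that trade-off is the core of the planning question.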

The help page includes citations and additional materials, but here is a brief rundown:

  • I have the math details in this github repo, see the methodology.pdf. It also includes notes on how I used different LLM tools to produce the webpage and the method materials. Each of the applications allows you to download the R code used to generate the graphs and tables.

  • I have created a series of YouTube videos demonstrating the application (WDD, IRR, Proportion tests)

  • I have posted my slides for the ASEBP talk

See you all in DC at ASEBP in a few weeks!

Job Advice Resources page

Minor update: I have created a page, Job Advice Resources, to cumulatively list all the materials I have written with advice for social scientists and crime analysts looking to pivot into private sector tech roles.

I still get maybe ~2 folks a month asking for advice, and I am always happy to chat. I wish PhD-granting institutions took this more seriously (it only takes minor changes to better prepare students).

If you are an administrator of a PhD program and actually care about getting your students jobs, also feel free to reach out and I am happy to discuss how I can help.

Policing Scholars should join ASEBP

Cross-posted on my Crime De-Coder blog.

I will be giving a talk at the upcoming American Society of Evidence Based Policing (ASEBP) conference (registration link here, May 20th-22nd in DC). My talk is How long to conduct your experiment? Check it out Thursday morning – I specifically asked for one of the short talks; 15 minutes is plenty to get the gist.

ASEBP Conference Flyer, 2026 in DC

I will be sharing a web-app to go with the talk soon (you can see my WDD tool and this blog post for background), but wanted to write a more general post about why researchers (as well as police officers who are interested in professionalization of the field) should join ASEBP.

To start, I have been involved in various ways with ASEBP for several years now, but I do not have any financial ties to ASEBP. I currently volunteer on the committee that reviews conference talks.

ASEBP is clearly the best organization for policing scholars currently in the country. The other main criminological societies (the American Society of Criminology and the Academy of Criminal Justice Sciences) are operating much as they did 30 years ago. Mostly they only exist to run journals and have a yearly conference where anyone can give a talk. They are incredibly insular, and have basically zero input from practitioners.

You can go and just look at the talks for ASC and ACJS – they are basically irrelevant to the vast majority of criminal justice operations (not only in policing, but in the CJ field as a whole). You can go look at the talks for the ASEBP conference and see they have a much clearer focus on realistic topics police departments are interested in, but presented by legitimate researchers and practitioners.

For scholars, I have developed working relationships with departments through multiple police practitioners I have met through ASEBP – and I hope to make more!

ASEBP was started by Renee Mitchell with a clear goal in mind – Renee is really the modern-day version of August Vollmer. ASEBP is intended to be a rigorous (unlike ASC, which allows almost anyone to present) conference and organization (ASEBP has training opportunities as well) to advance the use of evidence in policing operations.

If you think “I am not a policing researcher”, but have anything at all to do with criminal justice, feel free to get in touch. (Crime analysts should definitely join.) I have ideas to expand the organization – nothing equivalent currently exists in other parts of the criminal justice system either. Being evidence-based is really the core of what Renee and everyone else is building.

If you are going to the conference and want to meet up, feel free to send me an email, andrew.wheeler@crimede-coder.com, and I will find a time to get a coffee while we are in DC.

Interview on LEAP about LLMs for Mortals

I was recently interviewed by Jason Elder on the Law Enforcement Analysts Podcast about my new book, Large Language Models for Mortals: A Practical Guide for Analysts.

Jason does an excellent job with interviewing (and does a quality editing job with audio), so I suggest following the podcast if you are a crime analyst or researcher working with police departments.

We basically cover large swaths of the book: the basics of APIs, structured info extraction, some high level discussion of RAG, and how AI coding tools still need a bit of human oversight and direction. Even if you are not a coder, I think picking up a copy is a good idea to get an understanding of what is possible with the current tools.

Just to catalog the different coupon codes for the book:

  • LLMDEVS to get 50% off of the epub
  • TWOFOR1 to get $30 off when purchasing two books (can be any two books)

In the interview I also give a coupon code for the paperback version of the book, so take a listen if you are interested in $20 off the paperback.

You can purchase either epub or paperback from my store worldwide.

Part time product design positions to help with AI companies

Recently on the Crime Analysis sub-reddit an individual posted about working with an AI product company developing a tool for detectives or investigators.

The Mercor platform has many opportunities that may be of interest to my network, so I am sharing them here. These include roles not only for investigators, but also for GIS analysts, writers, community health workers, etc. (For the eligibility interviewer role, I think anyone who has worked in government services would likely qualify; the work is just reviewing questions.)

All are part time (minimum of 15 hours per week), remote, and can be in the US, Canada, or UK. (They cannot support H1-B or OPT visas in the US, however.)

Additionally, for professionals looking to get into the tech job market, see these two resources:

I actually just hired my first employee at Crime De-Coder. Always feel free to reach out if you think you would be a good fit for the types of applications I am working on (python, GIS, crime analysis experience). I will put you on the list to reach out to when new opportunities are available.


Detectives and Criminal Investigators

Referral Link

$65-$115 hourly

Mercor is recruiting Detectives and Criminal Investigators to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Detective and Criminal Investigator. Applicants must:

  • Have 4+ years of full-time work experience in this occupation
  • Be based in the US, UK, or Canada
  • Commit a minimum of 15 hours per week

Community Health Workers

Referral Link

$60-$80 hourly

Mercor is recruiting Community Health Workers to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Community Health Worker. Applicants must:

  • Have 4+ years of full-time work experience in this occupation
  • Be based in the US, UK, or Canada
  • Commit a minimum of 15 hours per week

Writers and Authors

Referral Link

$60-$95 hourly

Mercor is recruiting Writers and Authors to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Writer and Author.

Applicants must:

  • Have 4+ years of full-time work experience in this occupation
  • Be based in the US, UK, or Canada
  • Commit a minimum of 15 hours per week

Eligibility Interviewers, Government Programs

Referral Link

$60-$80 hourly

Mercor is recruiting Eligibility Interviewers, Government Programs to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as an Eligibility Interviewer, Government Programs. Applicants must:

  • Have 4+ years of full-time work experience in this occupation
  • Be based in the US, UK, or Canada
  • Commit a minimum of 15 hours per week

Cartographers and Photogrammetrists

Referral Link

$60-$105 hourly

Mercor is recruiting Cartographers and Photogrammetrists to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Cartographer and Photogrammetrist. Applicants must:

  • Have 4+ years of full-time work experience in this occupation
  • Be based in the US, UK, or Canada
  • Commit a minimum of 15 hours per week

Geoscientists, Except Hydrologists and Geographers

Referral Link

$85-$100 hourly

Mercor is recruiting Geoscientists, Except Hydrologists and Geographers to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Geoscientist. Applicants must:

  • Have 4+ years of full-time work experience in this occupation
  • Be based in the US, UK, or Canada
  • Commit a minimum of 15 hours per week

Year in Review 2025 and AI Predictions

For a brief year in review, total views for the two different websites decreased in the past year. For this blog, I am going to be a few thousand shy of 100,000 views. (In 2023 I had over 150k views, and in 2024 over 140k.) For the Crime De-Coder site, I am only going to get around 15k views.

Part of it is that I posted less; this will be the 21st blog post this year on the personal blog (2023 had 46 posts and 2024 had 32). The Crime De-Coder site had 12 blog posts, so pretty consistent with the prior year. Both are pretty bursty, with large bouts of traffic coming from Hacker News: if something I post makes it to the front page, I can get 1k to 10k views in a day or two. So the 2024 stats for the Crime De-Coder site included a few of those Hacker News bumps I did not get in 2025.

Some of it could legitimately be traditional Google search being supplanted by the gen AI tools. This is the first year I had appreciable referrals from ChatGPT, but they number fewer than 1,000; referrals from the other tools are trivial. If I worried about SEO more, I would have more updated/regular content (old pages are devalued quite a bit by Google, and it seems to be getting more severe over time).

I have upped my use of the free tools quite a bit. ChatGPT knows me pretty well, and I use Claude Desktop almost every day as well.

An IAM policy scroll is more of a nightmare, and I definitely ask more python questions than R, but the cartoon desk is pretty close to spot on. I am close to paying for an Anthropic subscription for Claude Code credits (currently I use pay-as-you-go via Bedrock, and this is the first month I went over $20).

Which pages on the blog will be popular I can never be sure of. My most popular post last year was Downloading Police Employment Trends from the FBI Data Explorer – a 2023 post that at random times would get several hundred visits within an hour. (Some bot collecting sites? I do not know.) If it is actual people, you should check out my Sworn Dashboard site, where you can look at trends for PDs much more easily than downloading all the data yourself!

One thing that has grown, though, is my short form posting on LinkedIn on my Crime De-Coder page. Total impressions for the year are over 340k, and I am currently a few shy of 4,400 followers.

LinkedIn is nice because it can be slightly longer form than the other social media sites. I would suggest you follow me there (in addition to signing up for RSS feeds for the two sites). That is the easiest way to follow my work.

I also took over as a moderator of the Crime Analysis Reddit forum. It is better than the IACA forums in my opinion, so I encourage folks to post crime analysis questions there.

Crime De-Coder Work

Crime De-Coder work has been steady (but not increasing). Similar to last year, I had several consulting gigs conducting crime analysis for premises liability cases (and one other case where I may share my opinions once it is over), and did some small projects with non-profits and police departments.

One big project was a python training in Austin.

The Python Book (which I also translated to Spanish and French) had a trickle of new sales: 2024 had around 100 sales and 2025 around 50. It is close to 2/3 print sales and 1/3 epub, so folks selling books should definitely still offer physical prints.

Doing trainings basically makes writing the book worth it, but I do hope eventually the book makes its way into grad school curricula. (Only one course so far.) I have pitched to grad schools to have me run a bootcamp similar to what I do for crime analysts, so if interested let me know.

The biggest new thing was that Crime De-Coder got an Arnold Grant, working with Denver PD on an experiment to evaluate a chronic offender initiative.

At the Day Gig

At my day gig, I was officially promoted to a senior manager and then quickly to a director position. Hence you get posts like what to show in your tech resume and notes on project management.

One of the reasons I am big on python – it is the dominant programming language in data science. It is hard for me to recruit from my network, as the majority of individuals just know a little R. (If you were a hard-core R person, with packages and well-executed public repos, I could more easily believe you would be able to migrate to python to work on my team.)

So learn python if you want to be a data scientist is my advice (and see other job market advice at my archived newsletter).

AI Predictions

At the day gig, my work went from 100% traditional supervised machine learning models to more like 50/50 traditional vs generative AI applications. The genAI hype is real, but I think it is worthwhile putting my thoughts to paper.

The biggest question is will AI take all of our jobs? I think a more likely end scenario is the AI tools just become better at helping humans do tasks. The leap from helping a human do something faster vs an AI tool doing it 100% on its own with 0 human input is hard. The models are getting incrementally better, but I think to fully replace people in a substantive way will require another big advancement in fundamental capabilities. Making a human 10x more productive is easier and still will make the AI companies a ton of money.

Sometimes people hear the 10x idea and say that will still take jobs, just not 100% of them. That view assumes there is only a finite amount of work to be done. That assumption is clearly not true; being able to do work faster/cheaper induces demand for more potential work. The example of calculators creating more banking jobs, not fewer, is basically the same point.

One of the critiques of the current systems is that they are overvalued, so we are in a bubble. I do not remember where I read it, but one estimate was that if everyone in the US spent $1 a day on the different AI tools, that would justify the current valuations for OpenAI, Anthropic, NVIDIA, etc. I think that is totally doable; at Gainwell, for example, we spend a few thousand dollars a workday on the foundation models for a few projects, and we are just going to continue to roll out more and more. Gainwell is a company with around 6k employees for reference, and our current AI applications touch far fewer than 1k of those employees. We have plenty of room to grow those applications.
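The back-of-the-envelope on that $1-a-day estimate is easy to run (the ~340 million US population figure is my assumption here, not from the source I half-remember):

```python
us_pop = 340_000_000           # rough US population (my assumption)
spend_per_day = 1              # dollars per person per day
annual = us_pop * spend_per_day * 365
print(f"${annual / 1e9:.0f} billion per year")  # ~ $124 billion
```

Whether $124 billion a year of revenue justifies the current valuations is the debatable part, but the order of magnitude is at least plausible.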

It is super hard though to build systems to help people do things faster. And we are talking like “this thing that used to take 30 minutes now takes 15 minutes”. If you have 100 people doing that thing all the time though, the costs of the models are low enough it is an easy win.

And this mostly only holds true for knowledge economy work that can be all done via software. There just still needs to be fundamental improvements to robotics to be able to do physical things. The tailor’s job is safe for the foreseeable future.

The change in the data science landscape to more generative AI applications definitely requires social scientists and analysts to up their game though to learn a new set of tools. I do have another book in the works to address that, so hopefully you will see that early next year.

Advice for crime analysts to break into data science

I recently received a question from a crime analyst looking to break into data science. Figured it would be a good topic for a blog post. I have written many resources over the years targeting recent PhDs, but the advice for crime analysts is not all that different: you need to pick up some programming, and likely some more advanced tech skills.

For background, the individual had SQL + Excel skills (many analysts may have just Excel). For the vast majority of analyst roles you should be quite adept at SQL. But SQL alone is not sufficient for even an entry level data science role.


For entry data science, you will need to demonstrate competency in at least one programming language. The majority of positions will want you to have python skills. (I wrote an entry level python book exactly for someone in your position.)

You will likely also need to demonstrate competency in machine learning or using large language models for data science roles. It used to be that Andrew Ng’s courses were the best recommendation (I see he has a spin-off, DeepLearningAI, now), though that is second hand; I have not personally taken them. LLMs are more popular now, so prioritizing learning how to call those APIs, build RAG systems, and do prompt engineering will, I think, make you slightly more marketable than traditional machine learning.

I have personally never hired anyone in a data science role without a master’s. That said, I would not have a problem with it if you had a good portfolio (nice website, Github contributions, etc.).

You should likely start looking and applying to “analyst” roles now. Don’t worry if they ask for programming experience you do not have; just apply. For many roles the posting is clearly wrong or has totally unrealistic expectations.

At larger companies, analyst roles can have a better career ladder, so you may just decide to stay in that role. If not, you can continue with additional learning opportunities to pursue a data science career.

Remote is more difficult than in person, but I would start by identifying companies that are crime analysis adjacent (Lexis Nexis, ESRI, Axon) and start applying to current open analyst positions.

For additional resources I have written over the years:

The alt-ac newsletter has various programming and job search tips. The 2023 blog post goes through different positions (if you want, it may be easier to break into project management than data science, though you have a good background for senior analyst positions), and the 2025 blog post goes over how to have a portfolio of work.

Cover page, data science for crime analysis with python

I scraped the Crime Solutions site

Before I get to the main gist, I am going to talk about another site. The National Institute of Justice (NIJ) paid RTI over $10 million to develop a forensic technology center of excellence over the past 5 years. While this effort involved more than just a website, the only thing that lives on in perpetuity for others to learn from the center of excellence are the resources they provide on the website.

Once funding was pulled, this is what RTI did with those resources:

The website is not even up anymore (it is probably a good domain to snatch up if no one owns it anymore), but you can see what it looked like on the internet archive. It likely had over 1000+ videos and pages of material.

I have many friends at RTI. It is hard for me to articulate how distasteful I find this. I understand RTI is upset with the federal government cuts, but to just simply leave the website up is a minimal cost (and likely worth it to RTI just for the SEO links to other RTI work).

Imagine you paid someone $1 million for something. They build it, and then later say “for $1 million more, I can do more”. You say OK, then after you have dispersed $500,000 you say “I am not going to spend more”. In response, the creator destroys all the material. This is what RTI did, except they had been paid $11 million and were still to be paid another $1 million. Going forward, if anyone from NIJ is listening, government contracts to build external resources should be licensed in a way that prevents this from happening.

And this brings me to the current topic, CrimeSolutions.gov. It is a bit of a different scenario, as NIJ controls this website. But recently they cut funding to the program, which was administered by DSG.

Crime Solutions is a website where they have collected independent ratings of research on criminal justice topics. To date they have something like 800 ratings on the website. I have participated in quite a few, and I think these are high quality.

To prevent someone (for whatever reason) simply turning off the lights, I scraped the site and posted the results to github. It is a PHP site under the hood, but changing everything to run as a static HTML site did not work out too badly.
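As a toy illustration of that PHP-to-static conversion, here is a python sketch that rewrites dynamic PHP links into flat static filenames. The rewrite rule here is made up for illustration; it is not the exact code used for the Crime Solutions mirror:

```python
import re

def php_to_static(html):
    """Rewrite dynamic links like page.php?id=3 into flat static
    names like page-id-3.html (a toy rule, for illustration)."""
    def repl(m):
        path, query = m.group(1), m.group(2)
        if not query:
            return path + ".html"
        # Slugify the query string so each dynamic page gets a file
        slug = re.sub(r"[^A-Za-z0-9]+", "-", query).strip("-")
        return f"{path}-{slug}.html"
    return re.sub(r'([\w/-]+)\.php(?:\?([^"\s]*))?', repl, html)

print(php_to_static('<a href="detail.php?id=42">Review</a>'))
# <a href="detail-id-42.html">Review</a>
```

Run over every scraped page, something like this turns a database-backed site into plain files that any static host (like GitHub Pages) can serve.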

So for now, you can view the material at the original website. But if that goes down, you have a close to same functional site mirrored at https://apwheele.github.io/crime-solutions/index.html. So at least those 800 some reviews will not be lost.

What is the long term solution? I could be a butthead and take down my github page tomorrow (so clone it locally); me scraping the site is not really a solution so much as a stopgap.

Ultimately we want a long term, public, storage solution that is not controlled by a single actor. The best solution we have now is ArDrive via the folks from Arweave. For a one time upfront purchase, Arweave guarantees the data will last a minimum of 200 years (they fund an endowment to continually pay for upkeep and storage costs). If you want to learn more, stay tuned, as me and Scott Jacques are working on migrating much of the CrimRXiv and CrimConsortium work to this more permanent solution.

Deep research and open access

Most of the major LLM chatbot vendors are now offering a tool called deep research. These tools basically just scour the web given a question, and return a report. For academics conducting literature reviews, the parallel is obvious. We just tend to limit the review to peer reviewed research.

I started with testing out Google’s Gemini service. Using that, I noticed almost all of the sources cited were public materials. So I did a little test with a few prompts across the different tools. Below are some examples of those:

  • Google Gemini question on measuring stress in police officers (PDF; it appears I cannot share this chat link)
  • OpenAI Effectiveness of Gunshot detection (PDF, link to chat)
  • Perplexity convenience sample (PDF, Perplexity was one conversation)
  • Perplexity survey measures attitudes towards police (PDF, see chat link above)

The report on officer mental health measures covered an area I was wholly unfamiliar with. The other tests are areas where I am quite familiar, so I could evaluate how well I thought each tool did. OpenAI’s tool is the most irksome to work with; citations work out of the box for Google and Perplexity, but not with ChatGPT, and I had to ask it to reformat things several times. Claude’s tool is not tested here, as using its deep research tool requires a paid account.

Offhand each of the tools did a passable job of reviewing the literature and writing reasonable summaries. I could nitpick things in both the Perplexity and the ChatGPT results, but overall they are good tools I would recommend people become familiar with. ChatGPT was more concise and more on-point. Perplexity got the right answer for the convenience sample question (use post-stratification), but also pulled in a large literature on propensity score matching (which is only relevant for X causes Y type questions, not overall distribution of Y). Again this is nit-picking for less than 5 minutes of work.

Overall these will not magically take over writing your literature review, but they are useful (the same way that doing simpler searches in Google Scholar is useful). The issue with hallucinating citations is mostly solved (see the exception for ChatGPT here). You should consult the original sources and treat deep research reports like on-demand Wikipedia pages, but let’s not kid ourselves – most people will not be that thorough.

For the Gemini report on officer mental health, I went through quickly and broke down the 77 citations across the publication type or whether the sources were in HTML or PDF. (Likely some errors here, I went by the text for the most part.) For the HTML vs PDF, 59 out of 77 (76%) are HTML web-sources. Here is the breakdown for my ad-hoc categories for types of publications:

  • Peer review (open) – 39 (50%)
  • Peer review (just abstract) – 10 (13%; these are all ResearchGate)
  • Open reports – 23 (30%)
  • Web pages – 5 (6%)

For a quick rundown of these categories: peer reviewed should be obvious, but sometimes the different tools cite papers that are not open access; in those cases they are just using the abstract to madlib how Deep Research fills in its report. (I count ResearchGate articles as abstract-only; the PDFs are often really available, but you need to click through a link to get them, and Google indexes the abstract, not the PDF behind the wall.) Open reports I reserve for think tanks or other government groups. Web pages I reserve for blogs or private sector white papers.

I’d note as well that even though it cites much peer reviewed work here, many of these sources are quite low quality (stuff in MDPI, or other outlets that look to me to be pay-to-publish). Basically none of the citations are in major criminology journals! As I am not as familiar with this area, this may be reasonable; I don’t know if this material often appears in different policing journals or Criminal Justice and Behavior and is just not being picked up, or if that literature simply does not exist in those outlets. I have a feeling it is missing a few of the traditional crim journal sources, though (and it picks up a few sources in different languages).

The OpenAI report largely hallucinated references in the final report it built (something Gemini and Perplexity currently do not do). The references it made up were often portmanteaus of different papers. Of the 12 references it provided, 3 were supposedly peer reviewed articles. In the ChatGPT chat you can go and see the actual web sources it used (actual links, not hallucinated). Of the 32 web links, here is the breakdown:

  • Pubmed 9
  • The Trace 5
  • Kansas City local news station website 4
  • Eric Piza’s wordpress website 3
  • Govtech website 3
  • NIJ 2

There are then single links to two different journals, and one to the Police Chief magazine. I’d note Eric’s site is not that old (the first RSS feed entry is February 2023), so Eric making a website where he simply shares his peer reviewed work greatly increased his exposure. For ChatGPT, his webpage is more influential than NIJ and peer reviewed CJ journals combined.

I did not do the work to go through the Perplexity citations, but in large part they appear quite similar to Gemini on their face. They do cite pure PDF documents more often than I expected, but still, only about 24% of the sources in the Gemini example are PDFs.

The long story short advice here is that you should post your preprints or postprints publicly, preferably in HTML format. For criminologists, you should do this currently on CrimRXiv. In addition to this, just make a free webpage and post overviews of your work.

These tests were just simple prompts as well. I bet you could steer the tool to give better sources with some additional prompting, like “look at this specific journal”. (Design idea if anyone from Perplexity is listening, allow someone to be able to whitelist sources to specific domains.)


One other random pro-tip for using Gemini chats: they do not print well, and if they have quite a bit of markdown and/or mathematics, they do not convert to a Google document very well either. What I did in those circumstances was a bit of javascript hacking. Go into your dev console (in Chrome, right click on the page and select “Inspect”, then in the panel that opens go to the “Console” tab). Then, depending on the chat currently open in the browser, you can try entering this javascript:

// Example printing out Google Gemini Chat
var res = document.getElementsByTagName("extended-response-panel")[0];
var report = res.getElementsByTagName("message-content")[0];
var body = document.getElementsByTagName("body")[0];
let escapeHTMLPolicy = trustedTypes.createPolicy("escapeHTML", {
 createHTML: (string) => string
});
body.innerHTML = escapeHTMLPolicy.createHTML(report.innerHTML);
// Now you can go back to page, cannot scroll but
// Ctrl+P prints out nicely

Or this works for me when revisiting the page:

var report = document.getElementById("extended-response-message-content");
var body = document.getElementsByTagName("body")[0];
let escapeHTMLPolicy = trustedTypes.createPolicy("escapeHTML", {
 createHTML: (string) => string
});
body.innerHTML = escapeHTMLPolicy.createHTML(report.innerHTML);

This page scrolling does not work, but Ctrl + P to print the page does.

The idea behind this: I want to get just the report content, which ends up hidden away in a mess of div tags, and promote it out to the body of the page. This will likely break in the near future as well, but you just need to figure out the correct way to get at the report content.

Here is an example of using Gemini’s Deep Research to help me make a practice study guide for my son’s calculus course.

The difference between models, drive-time vs fatality edition

Easily one of the most common critiques I make when reviewing peer reviewed papers is the concept that the difference between statistically significant and not statistically significant is not itself statistically significant (Gelman & Stern, 2006).

If you cannot parse that sentence, the idea is simple to illustrate. Imagine you have two models:

Model     Coef  (SE)  p-value
  A        0.5  0.2     0.01
  B        0.3  0.2     0.13

So often social scientists will say “well, the effect in model B is different” and then post-hoc make up some reason why the effect in Model B is different than Model A. This is a waste of time, as comparing the effects directly, they are quite similar. We have an estimate of their difference (assuming 0 covariance between the effects), as

Effect difference = 0.5 - 0.3 = 0.2
SE of effect difference = sqrt(0.2^2 + 0.2^2) = 0.28

So when you compare the models directly (which is probably what you want to do when you are describing comparisons between your work and prior work), this is a bit of a nothing burger. It does not matter that Model B is not statistically significant, a coefficient of 0.3 is totally consistent with the prior work given the standard errors of both models.
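The calculation above translates directly to code. Here is a minimal python sketch of the comparison (a plain z-test for the difference, assuming zero covariance as stated above):

```python
from math import erf, sqrt

def diff_test(b1, se1, b2, se2):
    """z-test for the difference between two coefficients,
    assuming zero covariance between the estimates."""
    diff = b1 - b2
    se = sqrt(se1**2 + se2**2)
    z = diff / se
    p = 1 - erf(abs(z) / sqrt(2))  # two-sided normal p-value
    return diff, se, p

# Model A: 0.5 (0.2), Model B: 0.3 (0.2)
diff, se, p = diff_test(0.5, 0.2, 0.3, 0.2)
print(round(diff, 2), round(se, 2), round(p, 2))  # 0.2 0.28 0.48
```

A p-value near 0.5 for the difference is the whole point: the two models are entirely consistent with each other, even though one crosses the 0.05 threshold and the other does not.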

I was reminded of this concept again, as Arredondo et al. (2025) replicate my paper with Gio on drive time, driving distance, and fatalities (Circo & Wheeler, 2021). They find that distance (whether Euclidean or drive time) is not statistically significant in their models. Here is the abstract:

Gunshot fatality rates vary considerably between cities with Baltimore, Maryland experiencing the highest rate in the U.S.. Previous research suggests that proximity to trauma care influences such survival rates. Using binomial logistic regression models, we assessed whether proximity to trauma centers impacted the survivability of gunshot wound victims in Baltimore for the years 2015-2019, considering three types of distance measurements: Euclidean, driving distance, and driving time. Distance to a hospital was not found to be statistically associated with survivability, regardless of measure. These results reinforce previous findings on Baltimore’s anomalous gunshot survivability and indicate broader social forces’ influence on outcomes.

This ends up being a clear example of the error I describe above. To make it simple, here is a comparison between their effects and the effects in my and Gio’s paper (in the format Coef (SE)):

Paper     Euclid          Network       Drive Time
Philly     0.042 (0.021)  0.030 (0.016)    0.022 (0.010)
Baltimore  0.034 (0.022)  0.032 (0.020)    0.013 (0.006)

At least for these coefficients, there is literally nothing anomalous at all compared to the work me and Gio did in Philadelphia.

To translate these coefficients to something meaningful, Gio and I estimate marginal effects – basically a reduction of 2 minutes results in a decrease of 1 percentage point in the probability of death. So if you compare someone who is shot 10 minutes from the hospital and has a 20% chance of death, if you could wave a wand and get them to the ER 2 minutes faster, we would guess their probability of death goes down to 19%. Tiny, but over many such cases makes a difference.
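That back-of-the-envelope can be checked on the logit scale, plugging in the Philly drive time coefficient from the table above (0.022 per minute). This is just a quick logit-shift approximation, not the full marginal effect calculation from the paper:

```python
from math import exp, log

def logistic(x):
    return 1 / (1 + exp(-x))

b = 0.022                    # logit coefficient per minute of drive time
p0 = 0.20                    # baseline probability of death
base = log(p0 / (1 - p0))    # logit of the baseline probability
p1 = logistic(base - 2 * b)  # get to the ER 2 minutes faster
print(round(p1, 3))          # ~0.193, roughly a 1 percentage point drop
```

So a 2 minute improvement moves a 20% chance of death to around 19%, matching the rough marginal effect quoted above.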

I went through some power analysis simulations in the past for a paper comparing longer drive time distances as well (Sierra-Arévalo et al., 2022). So the (very minor) differences could also be due to omitted variable bias (in logit models, omitted variables can bias coefficients towards 0 even if not confounded with the other X). The Baltimore paper does not include where a person was shot, which was easily the most important factor in my research for the Philly work.

To wrap up – we as researchers cannot really change broader social forces (nor can we likely change the location of level 1 emergency rooms). What we can change however are different methods to get gun shot victims to the ER faster. These include things like scoop-and-run (Winter et al., 2022), or even gun shot detection tech to get people to scenes faster (Piza et al., 2023).

References