YouTube interview with Manny San Pedro on Crime Analysis and Data Science

I recently did an interview with Manny San Pedro on his YouTube channel, All About Analysis. We discuss various data science projects I conducted while working as an analyst or in a researcher/collaborator capacity with different police departments:

Here is an annotated breakdown of the discussion, as well as links to various resources I mention in the interview. This is not a replacement for watching the video, but it is an easier set of notes, with links to more material on the particular items I discuss.

0:00 – 1:40, Intro

For a rundown of my career: I did my PhD in Albany (2008-2015). During that time period I worked as a crime analyst in Troy, NY, as well as a research analyst for my advisor (Rob Worden) at the Finn Institute. My research focused on quantitative projects with police departments (predictive modeling and operations research). In 2019 I went to the private sector, and now work as an end-to-end data scientist in the healthcare sector working with insurance claims.

You can check out my academic and my data science CV on my about page.

I discuss the workshop I did at the IACA conference in 2017 on temporal analysis in Excel.

Long story short: don't use percent change; use other metrics and line graphs.
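
As a quick made-up illustration of why percent change misleads with the low counts common in crime series (the numbers here are invented for this sketch, not from the workshop):

```python
import math

# made-up monthly burglary counts for illustration
last_month, this_month = 2, 4

pct_change = 100 * (this_month - last_month) / last_month
# a simple Poisson z-score for the difference of two counts
z = (this_month - last_month) / math.sqrt(this_month + last_month)

print(pct_change)   # 100.0, sounds alarming
print(round(z, 2))  # 0.82, well under a typical threshold of 2
```

A doubled count sounds dramatic, but the Poisson comparison shows the change is well within random noise for counts this small.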

7:30 – 13:10, Patrol Beat Optimization

I have the paper and code available to replicate my work with Carrollton PD on patrol beat optimization with workload equality constraints.

For analysts looking to teach themselves linear programming, I suggest Hillier’s book. I also give examples on linear programming on this blog.

It is different from statistical analysis, but I believe it has as much applicability to crime analysis as your more typical statistical analysis.
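
As a toy flavor of that optimization mindset (a brute force sketch with made-up workloads, not linear programming and not the method in the paper): split four hypothetical patrol areas into two beats so the workloads are as equal as possible.

```python
from itertools import combinations

# hypothetical workloads (e.g. weighted calls for service) for four areas
workload = {'A1': 10, 'A2': 20, 'A3': 30, 'A4': 40}
total = sum(workload.values())

# brute force every split of the areas into two beats, keep the most equal one
areas = list(workload)
best_split, best_diff = None, float('inf')
for r in range(1, len(areas)):
    for beat1 in combinations(areas, r):
        w1 = sum(workload[a] for a in beat1)
        diff = abs(w1 - (total - w1))
        if diff < best_diff:
            best_split, best_diff = set(beat1), diff

print(best_split, best_diff)  # a perfectly balanced split exists here
```

Real beat optimization adds contiguity and compactness constraints, which is where linear programming formulations earn their keep over brute force.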

13:10 – 14:15, Million Dollar Hotspots

There are hotspots of crime so concentrated that the expected reduction in labor costs from having officers assigned full time likely offsets the cost of the position. E.g. if you spend a million dollars in labor addressing crime at that location, and having a full-time officer reduces crime by 20%, the return on investment for the hotspot breaks even with paying the officer's salary.

I call these million dollar hotspots.
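
A back-of-the-envelope sketch of that break-even logic, with made-up figures:

```python
# hypothetical figures for illustration
annual_labor_cost = 1_000_000  # yearly labor spent responding to this hotspot
crime_reduction = 0.20         # assumed effect of a full-time assigned officer
officer_salary = 200_000       # fully loaded yearly cost of the position

expected_savings = annual_labor_cost * crime_reduction
print(expected_savings >= officer_salary)  # True, the position breaks even
```

Anything above that threshold, e.g. a bigger effect or a more expensive hotspot, puts the assigned position in the black.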

14:15 – 28:25, Prioritizing individuals in a group violence intervention

Here I discuss my work on social network algorithms to prioritize individuals to spread the message in a focused deterrence intervention. This is the opposite of how many people view "spreading" in a network: I identify something good I want to spread, and seed the network in a way that optimizes that spread:

I also have a primer on SNA, which discusses how crime analysts typically define nodes and edges using administrative data.
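
As a toy sketch of the seeding idea (a deliberate simplification, not the dominant set algorithm from the paper, and the network is invented): greedily pick the people who reach the most not-yet-covered individuals.

```python
# toy network as an adjacency dict (hypothetical individuals)
adj = {
    'A': {'B', 'C'},
    'B': {'A', 'C', 'D'},
    'C': {'A', 'B'},
    'D': {'B', 'E'},
    'E': {'D', 'F'},
    'F': {'E'},
}

def greedy_seeds(adj, k):
    """Greedily pick k seeds covering the most unreached people."""
    covered, seeds = set(), []
    for _ in range(k):
        # each seed reaches themselves plus their direct contacts
        best = max(adj, key=lambda n: len(({n} | adj[n]) - covered))
        seeds.append(best)
        covered |= {best} | adj[best]
    return seeds, covered

seeds, covered = greedy_seeds(adj, 2)
print(seeds)  # ['B', 'E'], and together they reach everyone in this toy graph
```

The actual work uses a more principled dominant set formulation, but the intuition, picking a small set of people whose contacts blanket the network, is the same.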

Listen to the interview for more general advice – in SNA, how you should define the network depends on what you want to accomplish in the end. So I discuss how you may want to define edges via victimization to prevent retaliatory violence (I think that would make sense for violence interrupters to be proactive, for example).

I also give an example of how detective case allocation may make sense to base on SNA – detectives have background with an individual's network (e.g. a rapport with a family based on prior cases worked).

28:25 – 33:15, Be proactive as an analyst and learn to code

Here Manny asked the question of how analysts can prevent their role from being turned into a more administrative one (just getting requests and running simple reports). I think the solution to this (not just in crime analysis, but also for analysts in the private sector) is to be proactive. You shouldn't wait for someone to ask you for specific information; you need to define your own role and conduct analysis on your own.

He also asked about crime analysis being under-used in policing. I think being stronger at computer coding opens up so many opportunities that learning Python, R, and SQL is the area where I would most like to see stronger skills across the industry. And it is a good career investment, as it translates to private sector roles.

33:15 – 37:00, How ChatGPT can be used by crime analysts

I discuss how ChatGPT may be used by crime analysts to summarize qualitative incident data and help inform analysis. (Check out this example by Andreas Varotsis.)

To be clear, I think this is possible, but I don't think the tech is quite up to that standard yet. Also, do not submit LEO sensitive data to OpenAI!

Also always feel free to reach out if you want to nerd out on similar crime analysis questions!

Setting conda environments in crontab

I prefer using conda environments to manage python (partly out of familiarity). Conda is a bit different though, in that it is often set up locally for a user's environment, and not globally as an installed package. This makes using it in bash scripts (or in .bat files on Windows) somewhat tricky.

So first, in a Unix environment, you can choose where to install conda. The installer then adds into your .bashrc profile a block that looks something like:

__conda_setup="$('/lib/miniconda/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/lib/miniconda/etc/profile.d/conda.sh" ]; then
        . "/lib/miniconda/etc/profile.d/conda.sh"
    else
        export PATH="/lib/miniconda/bin:$PATH"
    fi
fi
unset __conda_setup

Where here I installed it in /lib. This looks complicated at first glance, but really all it is doing is sourcing the conda.sh script and prepending miniconda/bin to the path.

Now, to run python code on a regular basis via crontab, I typically have crontab run shell scripts, not python directly. So say that is a file run_code.sh:

#!/bin/bash

# Example shell script (bash, since source and conda activate are not POSIX sh)
time_start=$(date +%Y_%m_%d_%H:%M)
echo "Begin script $time_start"

# Sourcing conda
source /lib/miniconda/etc/profile.d/conda.sh

# activating your particular environment
# may need to give the full path, not just the name
conda activate your_env

# if you want to check the environment
python --version

# you may need to change the directory at this point
echo "Current Directory is set to $PWD"
cd ...

# run your python script
log_file="main_log_$time_start.txt"
python main.py > "$log_file" 2>&1

I do not need to additionally add to the path in my experience; just sourcing that script is sufficient. Now edit your crontab (via crontab -e, which uses the vi editor by default) to look something like:

20 3 * * * bash /.../run_code.sh >> /.../cron_log.txt 2>&1

Where /.../ is shorthand for an explicit path to where the shell script and cron log lives.

This will run the shell script at 3:20 AM and append all of its output to the cron log. In crontab, if you just want conda available for all jobs, I believe you could do something like:

# global environment, can set keys, run scripts
some_key=abc
export some_key
source /lib/miniconda/etc/profile.d/conda.sh

20 3 * * * bash /.../run_code.sh >> /.../cron_log.txt 2>&1

But I have not tested this – and note that classic cron implementations only allow simple NAME=value environment settings at the top of a crontab, not commands like source, so this may not work with your cron. If it does work, you could technically run python scripts directly, but if you need to change environments you would still really need a shell script. It is good to know you can inject environment variables into the crontab environment though.

About the only other gotcha is file permissions. Sometimes in business applications you have service accounts running things, so the crontab runs as the service account. You just need to make sure to chmod files so the service account has the appropriate permissions. I tend to have more issues with log files by accident than with conda environments though.
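
For example, a minimal sketch of the permissions fix (the /tmp paths here are hypothetical stand-ins for wherever your script and log actually live, owned by or group-shared with the service account):

```shell
# hypothetical paths for illustration
log=/tmp/cron_demo_log.txt
script=/tmp/run_code_demo.sh

touch "$log"
printf '#!/bin/bash\necho hi\n' > "$script"

chmod 664 "$log"     # owner and group can write the log
chmod 775 "$script"  # owner and group can execute the script
```

If the service account is in the owning group, those modes are enough for cron to both run the script and write the log.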

Note for people setting up scheduled jobs on windows, I have an example of setting a conda environment in a windows bat file.

An additional random pro-tip with conda environments while I am here – if you don't want conda to set up new environments in your home directory by default (due to space or production processes), or want packages downloaded into a different cache location, you can do something like:

conda config --add pkgs_dirs /lib/py_packages
conda config --add envs_dirs /lib/conda_env

I have had issues in the past with too much junk accumulating in home.

Make more money

So I enjoy Ramit Sethi’s Netflix series on money management – fundamentally it is about money coming in and money going out and the ability to balance a budget. On occasion I see other budget coaches focus on trivial expenses (the money going out) whereas for me (and I suspect the majority of folks reading this blog with higher degrees and technical backgrounds) you should almost always be focused on finding a higher paying job.

Let's go with a common example people use as unnecessary discretionary spending – getting a $10 drink at Starbucks every day. If you do this, over the course of a 365 day year, you will have spent an additional $3,650. If you read my blog about coding and statistics and that expense bothers you, you are probably not making as much money as you should be.

Ramit regularly talks about asking for raises – I am guessing that for most people reading this blog, a raise would be well over that Starbucks expense. But part of the motivation to write this post is in reference to formerly being a professor. I think many criminal justice (CJ) professors are underemployed, and should consider better paying jobs. I am regularly seeing public sector jobs in CJ that have substantially better pay than being a professor. This morning a position was shared for an entry level crime analyst at the Reno Police Department, with a pay range from $84,000 to $102,000:

The low end of that starting pay range is competitive with the majority of starting assistant professor salaries in CJ. You can go check out what the CJ professors at Reno make (which is pretty par for the course for CJ departments in the US) in comparison. If I had stayed a CJ professor, even with moving from Dallas to other universities and trying to negotiate raises, I would be lucky to be making over $100k at this point in time. Again, that Reno position is an entry level crime analyst – asking for a BA + 2 years of experience or a Master's degree.

Private sector data science jobs, in comparison – in the DFW area in 2019, entry level positions often started at a $105k salary (based on personal experience). You can check out BLS data to examine average salaries in data science for your particular metro area (it is good to see the total number of people in that category in an area as well).

While academic CJ salaries can sometimes be very high (over $200k), these are quite rare. There are a few things going against professor jobs, and CJ ones in particular, that depress CJ professor wages overall. Social scientists in general make less than STEM fields, and CJ departments are almost entirely in state schools that tend to have wage compression. Getting an offer at Harvard or Duke is probably not in the cards if you have a CJ degree.

In addition to this, with the increase in the number of PhDs being granted, competition is stiff. There are many qualified PhDs, making it very difficult to negotiate your salary as an early career professor – the university could hire 5 people who are just as qualified in your stead who aren’t asking for that raise.

So even if you are lucky enough to have negotiating power to ask for a raise as a CJ professor (which most people don’t have), you often could make more money by getting a public sector CJ job anyway. If you have quant skills, you can definitely make more money in the private sector.

At this point, most people go back to the idea that being a professor is the ultimate job in terms of freedom. Yes, you can pursue whatever research line you want, but you still need to teach courses, supervise students, and occasionally do service to the university. These responsibilities all by themselves are a job (the entry level crime analyst at Reno will work less overall than the assistant professor who needs to hustle to make tenure).

To me the trade off in freedom is worth it because you get to work directly with individuals who actually care about what you do – you lose freedom because you need to make things, within the constraints of the real world, that real people will use. To me, being able to work directly on real problems and implement my work in real life is a positive, not a negative.

Final point to make in this blog, because of the stiff competition for professor positions, I often see people suggesting there are too many PhDs. I don’t think this is the case though, you can apply the skills you learned in getting your CJ PhD to those public and private sector jobs. I think CJ PhD programs just need small tweaks to better prepare students for those roles, in addition to just letting people know different types of positions are available.

It is pretty much at the point that alt-academic jobs are better careers than the majority of CJ academic professor positions. If you had the choice to be an assistant professor in CJ at University of Nevada Reno, or be a crime analyst at Reno PD, the crime analyst is the better choice.

Some adventures in cloud computing

Recently I have been trying to teach myself a bit of cloud architecture – it has not been going well. The zoo of micro-services available from AWS or Google is daunting. In my most recent experiment with Google, I had some trial money, spun up the cheapest Postgres database, created a trivial table, added a few rows, and then left it for a month. It racked up nearly $200 in bills in that time span. In addition, the only way I could figure out how to interact with the DB was some hacky sqlalchemy python code from my local system (besides the cloud shell psql).

But I have been testing other services that make it easier for me to see how I can use them for my business. This post will mostly be about supabase (note I am not paid for this!). Alt title for the post: supabase is super easy. Supabase is a cloud Postgres database, and out of the box it is set up to make hitting API endpoints very simple. The free tier database can hold 500MB (and get/post calls I believe are unlimited). Their beta pricing for smaller projects can up the Postgres DB to 8 gigs (at $25 per month per project). This pricing makes me feel much safer than the big cloud providers – where I am constantly concerned I will accidentally leave something turned on and rack up 4 or 5 digits of expenses.

Unlike the google cloud database, I was able to figure supabase out in a day. So first after creating a project, I created a table to test out:

-- SQL Code
create table
  public.test_simple (
    id bigint generated by default as identity not null,
    created_at timestamp with time zone null default now(),
    vinfo bigint null,
    constraint test_simple_pkey primary key (id)
  ) tablespace pg_default;

I actually created this in the GUI editor. Once you create a table, it has documentation on how to call the API in the top right:

If you don’t speak curl (it also has javascript examples), you can convert curl to python:

# Python code
import requests

sup_row_key = '??yourpublickey??'

headers = {
    'apikey': sup_row_key,
    'Authorization': f'Bearer {sup_row_key}',
    'Range': '0-9',
}

# Where filter
response = requests.get('https://ytagtevlkzgftkgwhsfv.supabase.co/rest/v1/test_simple?id=eq.1&select=*', headers=headers)

# Getting all rows
response = requests.get('https://ytagtevlkzgftkgwhsfv.supabase.co/rest/v1/test_simple?select=*', headers=headers)

When creating a project, you by default get a public key with read access, and a private key that has write access. But you can see the nature of the endpoint is quite simple – you just can't copy-paste the link, since you need to pass the headers.
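
As a hedged sketch of the write side (the key placeholder and inserted values here are made up, and the actual network call is left commented out): inserting a row is just a POST of JSON to the same table endpoint, using the private key. The Prefer header is standard PostgREST, which supabase uses under the hood.

```python
# hypothetical key placeholder, mirroring the read example above
base_url = 'https://ytagtevlkzgftkgwhsfv.supabase.co/rest/v1'
sup_write_key = '??yourprivatekey??'

headers = {
    'apikey': sup_write_key,
    'Authorization': f'Bearer {sup_write_key}',
    'Content-Type': 'application/json',
    'Prefer': 'return=representation',  # echo the inserted row back
}
new_row = {'vinfo': 42}  # id and created_at fall back to their defaults

# then, with your real key:
# requests.post(f'{base_url}/test_simple', headers=headers, json=new_row)
```

The response (with return=representation) comes back formatted the same as the get calls, so the client code stays symmetric.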

One example I was thinking about was more on-demand webscraping/geocoding. As a way to be nice to the different sources you are scraping data from, you can call them once and cache the results. Now back in Supabase, to do this I enabled the plv8 database extension to be able to define javascript functions. Here is the SQL I used to create a Postgres function:

-- SQL Code
create or replace function public.test_memoize(mid int)
returns setof public.test_simple as $$

    // This is javascript
    var json_result = plv8.execute(
        'select * from public.test_simple WHERE id = $1',
        [mid]
    );
    if (json_result.length > 0) {
        return json_result;
    } else {
        // here just an example, you would use your own function
        var nv = mid + 2;
        var res_ins = plv8.execute(
            'INSERT INTO public.test_simple VALUES ($1,DEFAULT,$2)',
            [mid,nv]
        );
        // not really necessary to do a 2nd get call
        // could just pass the results back, ensures
        // result is formatted the same way though
        var js2 = plv8.execute(
            'select * from public.test_simple WHERE id = $1',
            [mid]
        );
        return js2;
    }

$$ language plv8;

This is essentially memoizing a function, just using a database backend to cache the call. It looks to see if the value you pass in already exists; if not, it does something with the input (here just adds 2), inserts the result into the DB, and then returns the result.

Now to call this function from a web-endpoint, we need to post the values to the rpc endpoint:

# Python post to supabase function
json_data = {'mid': 20}

response = requests.post('https://ytagtevlkzgftkgwhsfv.supabase.co/rest/v1/rpc/test_memoize', headers=headers, json=json_data)

This type of memoization is good if you have expensive functions but not all that varied input (and you can't make a batch lookup table upfront).
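
Here is a local sketch of the same memoization pattern, using sqlite as the cache backend instead of supabase (a stand-in just to show the logic; the stand-in "expensive" function mirrors the add-2 example above):

```python
import sqlite3

# cache table playing the role of test_simple
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE cache (mid INTEGER PRIMARY KEY, vinfo INTEGER)')

def expensive(mid):
    return mid + 2  # stand-in for scraping/geocoding

def memoized(mid):
    row = con.execute('SELECT vinfo FROM cache WHERE mid = ?', (mid,)).fetchone()
    if row:
        return row[0]   # cache hit, no expensive call
    val = expensive(mid)
    con.execute('INSERT INTO cache VALUES (?, ?)', (mid, val))
    return val

print(memoized(20))  # 22, computed and cached
print(memoized(20))  # 22, served from the table
```

The Postgres function above is the same check-then-insert-then-return flow, just living server side next to the data.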

Supabase also has the ability to do edge functions (server side typescript). That may be a better fit for this use case, but it is very nice to be able to make a quick function and test it out.

Next up on the blog when I get a chance: I have also been experimenting with the Oracle Cloud free tier. I haven't been able to figure out the database stuff on their platform yet, but you can spin up a nice little persistent virtual machine (with 1 gig of RAM). Very nice for tiny batch jobs, and the next blog post will cover setting up conda and showing how to do cron jobs. Batch scraping slow but smaller data jobs is, I am thinking, a good use case. (And having a persistent machine is nice, for the same reason having your car is nice even if you don't use it all day every day.)

One thing I am still searching for: if I have more data intensive batch jobs – needing more RAM for processing (I often don't need GPUs, but more RAM is nice) – what is my best cloud solution? So not GitHub Actions (jobs can be long running), but needing more RAM than the cheap VPS. I am not even sure what the comparable products are from the big companies.

Let me know in the comments if you have any suggestions! Just knowing where to get started is sometimes very difficult.

Javascript apps and ASEBP update

So for a quick update, my most recent post on ASEBP is This One Simple Trick Will Improve Attitudes Toward Police. (Note you need an ASEBP membership to read it.) There are several recent studies by different groups showing that following up with victims, even if you won't solve the crime in the end, improves overall attitudes towards police. A simple thing for PDs to do. See the reference list at the end of this post for the various studies.

Besides that, no blog posts here recently as I have been working on my CRIME De-Coder site, in particular developing a few additional javascript demos. My most recent one is a social network app applying my dominant set algorithm (to prioritize call-ins in a group violence/focused deterrence intervention) (Wheeler et al., 2019).

The javascript apps are very nice, as they are all client side – my website just serves the text files, and your local browser does all the hard work. I don’t need to worry about dealing with LEO sensitive data in that scenario either.

I am still learning a ton about website development (I will have some surveys deployed using PHP + google sheets on CRIME De-Coder soonish). I am debating whether it is worth writing up blog posts here on that. The javascript network application is almost a 1:1 translation of my python code. I don't know much about doing vectorized stuff in javascript, but the network algorithm is mostly just dictionaries, sets, and loops. If interested, you can just right click in the browser when the page is open and inspect the source.

References

  • Clark, B., Ariel, B., & Harinam, V. (2022). How Should the Police Let Victims Down? The Impact of Reassurance Call-Backs by Local Police Officers to Victims of Vehicle and Cycle Crimes: A Block Randomized Controlled Trial. Police Quarterly, Online First.
  • Curtis-Ham, S., & Cantal, C. (2022). Locks, lights, and lines of sight: an RCT evaluating the impact of a CPTED intervention on repeat burglary victimisation. Journal of Experimental Criminology, Online First.
  • Henning, K., et al. (2023). The Impact of Online Crime Reporting on Community Trust. Police Chief Online, April 12, 2023.
  • Wheeler, A. P., McLean, S. J., Becker, K. J., & Worden, R. E. (2019). Choosing representatives to deliver the message in a group violence intervention. Justice Evaluation Journal, 2(2), 93-117.

Dashboards are often not worth the effort

When end users see dashboards, they often think "this is really whiz-bang cool, let's do that for our data". There are two issues though that I commonly see with dashboards. One is that the task to be accomplished with the dashboard is not well defined, and so even a visually well done dashboard mostly goes unused. The second is that there are a ton of headaches in deploying dashboards with real data – and the effort to do it right is often not worth it.

For the first part, consider a small agency that has a simple crime trends dashboard. The intent is to identify anomalous upticks of crime. This requires someone to log into the dashboard, click around the different crime trends, and visually see if they are higher than expected. It would be easier to either have an automated alert when some threshold is met, or a standardized report (e.g. once a week) that is emailed out for review.
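
For example, a minimal sketch of such an automated alert (made-up weekly counts; a simple 3-sigma Poisson control limit is one reasonable choice among several for the threshold):

```python
import math

# made-up weekly counts; the last entry is the current week
weekly_counts = [8, 11, 9, 7, 10, 12, 9, 21]
history, current = weekly_counts[:-1], weekly_counts[-1]

mean = sum(history) / len(history)
upper = mean + 3 * math.sqrt(mean)  # 3-sigma Poisson upper control limit

if current > upper:
    print(f'ALERT: {current} exceeds expected upper bound {upper:.1f}')
```

Wire the print into an email and schedule it, and nobody has to remember to log in and eyeball a chart.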

This post is not even going to be about 'most dashboards show stupid data' or 'the charts show the data in totally inappropriate ways'. Even in cases in which you can build a nice dashboard, the complexity level is IMO not worth it in many situations I have encountered. Automating alerts and building regular standard reports is, in my opinion, a better solution for data products in the vast majority of situations. The whiz-bang ability to interactively click stuff will only be used by a very small number of people (who can often build stuff like that for themselves anyway).

So onto the second part, deploying dashboards with real data that others can interact with. Many of the cool example dashboards you see online do tricks that would be inappropriate for a production dashboard. So even if someone is like ‘yeah I can do a demo dashboard in a day on my laptop’, there is still a ton of more work to expose that dashboard to different individuals, which is probably the ultimate goal.

Dashboards have two parts: A) connecting to a source database, then B) exposing the user-interface to outside parties. Seems simple right? Wrong.

Let's start with part A, connecting to a source database. Many dashboard examples you see online cheat at this stage – they embed a static version of the data in the dashboard itself. If Gary needs to re-upload the data to the dashboard every week, what happens when Gary goes on vacation or takes a day off? You now have an out of date dashboard.

It is quite hard to manage connections between live source data and a dashboard. In the case of a public facing dashboard, I would just host a public file somewhere on the internet (so anyone can download the source data), point the dashboard to that source, and then have some automated process update the source data. This solves the 'Gary went on vacation' factor at least, and there is no security risk. You intentionally only upload data that can be disseminated to the public.

One potential alternative for a public facing dashboard is to make it serverless. This will often embed the dashboard (and maybe the data) into the webpage itself (e.g. pyscript wasm), so it may be slow to start up, but it will work reasonably well if you can handle a one minute lag. You don't need to worry about malicious actors in that scenario, as the heavy computation is done on the client's computer, not your server. I have an example on CRIME De-Coder (note it is currently not up to date – my code is running fine, but Dallas, post the cyber-attack, has not been updating their public data).

Managing a direct connection to your source data in a public facing dashboard is not a good idea, as malicious actors can spam your dashboard. This denial of service attack will not only make your dashboard unresponsive, but will also eat up your database processing. (Big companies often have a reporting database server vs a production database server in the best case scenario, but this requires resources most public sector agencies do not have.)

The solution to this is to limit who can access the dashboard, part B above. Unfortunately, with the majority of dashboard software, once you want to make a live connection and/or limit who can see the information, you are in 'need to pay for this' territory. A very unfortunate aspect of the 'you need to pay for this' is that most of these vendors charge per viewer – it isn't a flat fee. PowerBI is maybe OK if your organization already pays for SharePoint; Tableau licensing is so painful I wouldn't suggest it.

So what is the alternative? You are now in charge of spinning up your own server so people can click a dropdown menu and generate a line graph. Again you need to worry about security, but at least if you can hide behind a local network or VPN it is probably doable for most police departments.

I don’t want to say dashboards are never worth it. I think public facing dashboards are a good thing for police transparency, and if done right are easier to implement than private ones. Private ones are doable as well (especially if limiting to intranet applications). But like I said, this level of effort is over the top compared to just doing a regular report.

Criminology not on the brink

I enjoy reading Jukka Savolainen's hot takes, most recently Give Criminology a Chance: Notes from a discipline on the brink. I think Jukka is wrong on a few points, but if you are a criminologist who goes to ASC conferences, definitely go and read it! To be specific, in addition to the title, here are two paragraphs from near the end, in full, that I mostly disagree with:

I arrived in Atlanta with a pessimistic view of academic criminology. During my 30 years in the field, the scholarship has become increasingly political and intolerant of evidence that contradicts the progressive narrative. The past few years have been particularly discouraging for those who care about scientific rigor and truth. Despite these reservations, I approached the ASC meeting with an open mind.

The situation is far from hopeless. True, criminology possesses precious little viewpoint diversity. Much of the scholarship is more interested in pursuing a political agenda than objective truth. The ASC’s outward stance as a politically neutral arbiter of scientific evidence is at odds with its recent history as an activist organization.

Although his take on a generic American Society of Criminology experience is, again, accurate and not misleading, I am not so sure about the assessment of the trend over time, e.g. "increasingly political and intolerant". Nor do I think criminology has too "little viewpoint diversity".

The latter statement is, to be frank, absurd. For those who haven't been to an ASC conference, there are no restrictions on who can become a member of the American Society of Criminology. The yearly conference is essentially open as well – you have to submit an abstract for review, but I have never heard of an abstract being turned down (let me know if you are aware of an example!). So you really get the kaleidoscope (as Jukka articulated). Policing scholars, abolitionists, quantitative, qualitative, ghost criminology – criminologists are a heterogeneous bunch.

About the only way to steelman the statement "precious little viewpoint diversity" is to say something more like: certain opinions in the field are rewarded or punished, such as in advancement to positions at ASC, or in what gets published in the ASC journals (Criminology or Criminology and Public Policy). Or maybe that the average mix of the field slants one way or another (say between pro criminal justice and critical criminal justice).

I have not been around 30 years like Jukka, and I suppose I lost my card carrying criminologist privileges when I went to the private sector, but I haven't seen any clear change in the nature of the field, the ASC conference, or what has been published in the last ~10 years I have been in a reasonable position to make that judgment. I think Jukka (or anyone) would be hard pressed to quantify his perception – but I am certainly open to real evidence if I am wrong here (again, just my opinion based on fewer years of experience than Jukka).

As a side story, I have heard many of my friends who do work in policing state that they have been criticized for it by colleagues, and subsequently argue our field is "biased against cops". I don't doubt my friends' personal experiences, but I have personally never been criticized for working with the police. I have been criticized by fellow policing scholars for "downloading datasets" and "not being a real policing scholar". I know qualitative criminologists who think the field is biased against them (based on rates of qualitative publishing). I know quantitative criminologists who have given examples of bias in the field against more rigorous empirical methods. I know Europeans who think the field is biased towards Americans. I bet the ghost criminologists think the living are biased against the (un)dead.

I think saying “Much of the scholarship is more interested in pursuing a political agenda than objective truth” is a tinge strong, but sure it happens (I am not quite sure how to map “much” to a numeric value so the statement can be confirmed or refuted). I would say being critical of some work, but then uncritically sharing equally unrigorous work that confirms your pre-conceived notions is an example of this! So if you think one or more is “much”, then I guess I don’t disagree with Jukka here – to be clear though I think the majority of criminologists I have met are interested in pursuing the truth (even if I disagree with the methods they use).

So onto the last sentence of Jukka's that I disagree with: "The ASC's outward stance as a politically neutral arbiter of scientific evidence is at odds with its recent history as an activist organization." I disagree with this because I personally have a non-normative take on science – I don't think science is wholly defined by being a neutral arbiter of truth, and doing science in the real world literally involves things that are "activist".

I believe if you asked most people with PhDs what defines science, they would say that science is defined via the scientific method. I personally think that is wrong though. I think about the only thing we share as scientists is being critique-y assholes. The way I do my work is so different from many other criminologists (both quantitative and qualitative), let alone researchers in other scientific fields (like theoretical physics or history), that I think saying we "share a common research method" is a bit of a stretch.

When my son was younger and had science fairs, they were broken into two different types of submissions: traditional science experiments, like measuring a plant's growth with and without sunlight, or engineering "build things" projects. The academic work I am most proud of is in the engineering "build things" camp. These modest contributions are various algorithms – a few have been implemented in major software, and I know some crime analysis units using that work as well – and really have nothing to do with the scientific method. Me deriving standard errors for control charts for crime trends is only finding truth in a very tautological way – I think they are useful though.

There is no bright line between my work and “activism” – I don’t think that is a bad thing though; it was the point of the work. You could probably say Janet Lauritsen is an activist for more useful national level CJ statistics. Jukka appears to me to be expressing the normative opinion that Janet’s activism is more rigorously motivated than Vitale’s – which I agree with, but that doesn’t say much, if anything, about the field of criminology as a whole or recent changes in the field. (If anything it is evidence against Jukka’s opinion; I posit Janet is clearly more influential in the field than Vitale.)


To end with the note “on the brink” – it may be unfair to Jukka (sometimes you don’t get to pick your titles in magazine articles). Part of the way I view being an academic and critiquing work I imagine people find irksome – it involves taking real words people say, trying to reasonably map them to statements that can be confirmed or refuted (often people say things that are quite fuzzy), and then articulating why those statements are maybe right/maybe wrong. It can seem pedantic, but I am a Popper kind-of-guy, and being able to confirm or refute statements I think is the only way we can get closer to objective truth.

To do this with “on the brink” takes more leaps than statements such as “increasingly political and intolerant”. “Criminology” is the general study of criminal behavior – which I am pretty confident will continue on as long as people commit crimes, with or without the ASC yearly conference. We can probably limit the “on the brink” statement to something more specific, like the American Society of Criminology being on the brink. I don’t know about the ASC financials, but I am going to guess Jukka meant this statement more as a proclamation about the legitimacy of the organization to outside groups.

I am not so sure this is the point of ASC though – it derives its value by being a social club for people who do criminology research. At least that is my impression of going to ASC conferences from my decade as a criminologist. Part of Jukka’s point is that things are getting worse more recently – you can’t lose something you never had to begin with though.

Updates on CRIME De-Coder and ASEBP

So I have various updates on my CRIME De-Coder consulting site, as well as new posts on the American Society of Evidence Based Policing Criminal Justician series.

CRIME De-Coder

Blog post, Don’t use percent change for crime data, use this stat instead. I have written a bunch here about using Poisson Z-scores, so if you are reading this it is probably old news. Do us all a favor and in your Compstat reports drop ridiculous percent change metrics with low baselines, and use 2 * ( sqrt(Current) - sqrt(Past) ).
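To make that stat concrete, here is the formula as a quick one-liner (my own sketch for this post, not code from the linked article):

```python
import math

# Poisson Z-score for comparing two crime counts:
# 2 * (sqrt(Current) - sqrt(Past)). Under Poisson noise this is roughly
# standard normal, so values beyond +/-2 or +/-3 flag meaningful changes,
# unlike percent change, which blows up with low baselines.
def poisson_z(current, past):
    return 2 * (math.sqrt(current) - math.sqrt(past))

print(poisson_z(100, 81))  # 2.0, right at the usual alert threshold
print(poisson_z(3, 1))     # ~1.46, a +200% change but not so notable
```

The second example is exactly the low-baseline scenario – 1 homicide going to 3 is a scary-sounding +200%, but the Z-score correctly treats it as within normal noise.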

Blog post, Dashboards should be up to date. I will have a more techy blog post here on my love/hate relationship with dashboards (most of the time static reports are a better solution). One scenario where they do make sense is public facing dashboards – but they should be up to date. The “free” versions of popular tools (Tableau, PowerBI) don’t allow you to link to a source dataset and get auto-updated, so you see many old, out-of-date dashboards online. If you contract with me, I can automate it so it is up to date and doesn’t rely on an analyst manually updating the information.

Demo page – that page currently includes demonstrations for:

The WDD Tool is pure javascript – picking up more of that slowly (the Folium map has a few javascript listener hacks to get it to look the way I want). As a reference for web development, I like Jon Duckett’s three books (HTML, javascript, PHP).

Ultimately too much stuff to learn, but on the agenda are figuring out google cloud compute + cloud databases a bit more thoroughly. Then maybe add some PHP to my CRIME De-Coder site (a nicer contact me form, auto-update sitemap, and rss feed). I also want to learn how to make ArcGIS dashboards as well.

Criminal Justician

The newest post is Situational crime prevention and offender planning – discussing one of my favorite examples of crime prevention through environmental design (on suicide prevention) and how it is a signal about offender behavior that goes beyond simplistic impulsive behavior. I then relate this back to current discussion of preventing mass shootings.

If you have ideas about potential posts for the society (or this blog, or CRIME De-Coder’s blog), always feel free to make a pitch.

Hacking folium for nicer legends

I have accumulated various code to hack folium based maps over several recent projects, so figured I would share. It is a bit too much to walk through the entire code inline in a blog post, but at a high level the extras this code does:

  • Adds in svg elements for legends in the layer control
  • Has a method for creating legends for choropleth maps
  • Inserts additional javascript to make a nice legend title (here a clickable company logo) + additional attributions

Here is the link to a live example, and below is a screenshot:

So as a quick rundown, if you are adding in an element to a folium layer, e.g.:

folium.FeatureGroup(name="Your Text Here", overlay=True, control=True)

You can insert arbitrary svg code into the name parameter, e.g. you can do something like <span><svg ...></svg>HotSpots</span>, and it will correctly render. So I have functions to make the svg icon match the passed color. So you can see I have a nice icon for city boundaries, as well as a blobby thing for hotspots.
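A minimal sketch of such a helper (hypothetical names, not my exact code from the repo) – it builds the span + svg string that then goes in the name parameter:

```python
# Hypothetical helper: build a layer-control entry with an inline svg
# square icon matching the passed color. folium renders the name string
# as html in the layer control, so the icon shows up next to the label.
def legend_name(label, color, size=14):
    svg = (f'<svg width="{size}" height="{size}">'
           f'<rect width="{size}" height="{size}" fill="{color}"/></svg>')
    return f"<span>{svg} {label}</span>"

# e.g. folium.FeatureGroup(name=legend_name("HotSpots", "red"),
#                          overlay=True, control=True)
print(legend_name("HotSpots", "red"))
```

Swapping the `<rect>` for other svg shapes is how you would get a line icon for boundaries versus a blob for hotspots.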

In the end there are so many possible parameters that I try to make reasonable functions without a crazy number of them. So if someone wanted different icons, I might just make a different function (probably wouldn’t worry about passing in different potential svg).

I have a function for choropleth maps as well – I tend to not like the functions that you pass in a continuous variable and it does a color map for you. So here it is simple, you pass in a discrete variable with the label, and a second dictionary with the mapped colors. I tend to not use choropleth maps in interactive maps very often, as they are more difficult to visualize with the background map. But there you have it if you want it.
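A sketch of what that discrete interface looks like (hypothetical function and key names, not the code in my repo) – you supply a mapping of area id to label, plus the label-to-color dictionary:

```python
# Hypothetical discrete choropleth styler: `labels` maps an area id
# (assumed here to be a GEOID property in the geojson) to a category
# label, and `color_map` maps each label to a fill color.
def choro_style(labels, color_map, key="GEOID", default="#808080"):
    def style(feature):
        lab = labels.get(feature["properties"].get(key))
        return {"fillColor": color_map.get(lab, default),
                "color": "black", "weight": 1, "fillOpacity": 0.5}
    return style

# e.g. folium.GeoJson(geojson, style_function=choro_style(
#          {"48113": "High"}, {"High": "#d73027", "Low": "#4575b4"}))
```

Keeping the binning outside the function is the point – you decide the discrete categories and colors up front, rather than letting a continuous color map pick them for you.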

The final part is using javascript to insert the Crime Decoder logo (as a title in the legend/layer control), as well as the map attribution with additional text. These are inserted via additional javascript functions that I append to the html (so this wouldn’t work say inline in a jupyter notebook). The logo part is fairly simple, the map attribution though is more complicated, and requires creating an event listener in javascript on the correct elements.

The way this works: I actually have to save the HTML file, then reread the text back into python, add in the additional CSS/javascript, and resave the file.
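The pattern itself is simple string surgery; here is a minimal sketch, with placeholder snippets standing in for the real CSS/javascript (and a stub string standing in for the html that folium’s m.save("map.html") writes out):

```python
# Stand-in for the html produced by folium's m.save("map.html")
saved_html = "<html><head></head><body><div id='map'></div></body></html>"

extra_css = "<style>/* legend tweaks */</style>"
extra_js = "<script>/* attribution event listener */</script>"

# Inject just before the closing head/body tags; in the real workflow
# this string is then written back out over the saved html file.
html = saved_html.replace("</head>", extra_css + "</head>")
html = html.replace("</body>", extra_js + "</body>")
```

Since the javascript is appended to the final file, it can attach event listeners to elements folium has already rendered – which is also why this approach doesn’t work inline in a jupyter notebook.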

If you want something like this for your business/website/analysts, just get in contact.

For next tasks, I want to build a demo-dashboard for Crime De-Coder (probably a serverless dashboard using wasm/pyscript). But in terms of leaflet extras, the ability to embed SVG into different elements, you can create charts in popups/tooltips, which would be a cool addition to my hotspots (click and it has a time series chart inside).

Saving data files in wheel packages

As a small update, I continue to use my retenmod python package as a means to illustrate various python packaging and CICD tricks. Most recently, I have added in an example of saving local data files inside of the wheel package. So instead of just packaging the .py files in the wheel package, it also bundles up two data files: a csv file and a json data file for illustration.

The use case I have seen for this: sometimes I see individual .py files in people’s packages that run thousands of lines – they are typically just lookup tables. It is better to save those lookup tables in more traditional formats than to coerce them into python objects.

It is not too difficult, but here are the two steps you need:

Step 1, in setup.cfg in the root, I have added this package_data option.

[options.package_data]
* = *.csv, *.json

Step 2, create a set of new functions to load in the data. You need to use pkg_resources to do this. It is simple enough to just copy-paste the entire data_funcs.py file here in a blog post to illustrate:

import json
import numpy as np
import pkg_resources

# Reading the csv data
def staff():
    stream = pkg_resources.resource_stream(__name__, "agency.csv")
    df = np.genfromtxt(stream, delimiter=",", skip_header=1)
    return df

# Reading the metadata
def metaf():
    stream = pkg_resources.resource_stream(__name__, "notes.json")
    res = json.load(stream)
    return res

# just having it available as object
metadata = metaf()

So instead of doing something like pd.read_csv('agency.csv') (here I use numpy, as I don’t have pandas as a package dependency for retenmod), you create a stream object, and the __name__ is just the way for python to figure out all of the relative path junk. Depending on the downstream modules, you may need to call stream.read(), but here for both json and numpy you can just pass the stream to their respective read functions and it works as intended.

And again you can check out the github actions to see how testing the package and generating the wheel file works, all in one go.


If you install the latest via the github repo:

pip install https://github.com/apwheele/retenmod/blob/main/dist/retenmod-0.0.1-py3-none-any.whl?raw=true

And to test out this, you can do:

from retenmod import data_funcs

data_funcs.metadata # dictionary from json
data_funcs.staff() # loading in numpy array from CSV file

If you go check out wherever your package is installed on your machine, you can see that it will have the agency.csv and the notes.json file, along with the .py files with the functions.

Next on the todo list: auto uploading to pypi and incrementing minor tags via the CICD pipeline. So if you know of example packages that do that already, let me know!