Some notes on project management

I have recently participated in several projects at the day gig that I think went well – these were big projects with multiple parties, and we came together and got to deployment on a shortened timeline. I think it is worth putting together my notes on why these went well.

My personal guiding star for project management is very simple – have a list of things to do, and try to do them as fast as reasonably possible. This is important, as anything that distracts from this ultimate goal is not good. It does not matter if it is agile ceremonies or excessive waterfall requirements gathering. In practice, if you are too focused on the bureaucracy, either of them can get in the way of the core goal – have a list of things to do and do them in a reasonable amount of time.

For a specific example, we used to have two week sprints for my team. We stopped doing them for these projects that have IMO gone well, as another group took over project management for the multiple groups. The PM just had an Excel spreadsheet with dates. I did not realize how much waste we had by forcing everything to a two week cycle. We spent too much time trying to plan two weeks out, and ultimately not filling up people's plates. It is just so much easier to say “ok we want to do this in two days, and then this in the next three days” etc. And when shit happens just be like “ok we need to push X task by a week, as it is much harder than anticipated”, or “I finished Y earlier, but I think we should add a task Z we did not anticipate”.

If sprints make sense for your team go at it – they just did not for my team. They caused friction in a way that was totally unnecessary. Just have a list of things to do, and do them as fast as reasonably possible.

Everything Parallel

So this has avoided the hard part, what to put on the to-do list? Let me discuss another very important high level goal of project management first – you need to do everything in parallel as much as possible.

For a concrete example that continually comes up in my workplace, you have software engineering (writing code) vs software deployment (how that code gets distributed to the right people). I cannot speak to other places (places with a mono-repo/SaaS it probably looks different), but Gainwell is really like 20+ companies all Frankenstein-ed together through acquisitions and separate big state projects over time (and my data science group is pretty much solution architects for AI/ML projects across the org).

It is more work for everyone in this scenario trying to do both writing code and deployment at the same time. Software devs have to make up some reasonably scoped requirements (which will later change) for the DevOps folks to even get started. The DevOps folks may need to work on Docker images (which will later change). So it is more work to do it in parallel than it is sequential, but drastically reduces the overall deliverable timelines. So e.g. instead of 4 weeks + 4 weeks = 8 weeks to deliver, it is 6 weeks of combined effort.

This may seem like “duh Andy” – but I see people all the time not planning far enough out to think this through (which tends to look more like waterfall than agile). If you want to do things in months and not quarters, you need everyone working on things in parallel.

For another example at work, we had a product person want to do extensive requirements gathering before starting on the work. This again can happen in parallel. We have an idea, devs can get started on the core of it, and the product folks can work with the end users in the interim. Again more work, things will change, devs may waste 1 or 2 or 4 weeks building something that changes. Does not matter, you should not wait.

I could give examples of this purely for writing code as well, e.g. I have one team member write certain parts of the code first, even when that is inconvenient for them, because that component not being finished is a blocker for another member of the team. Basically it is almost always worth working harder in the short term if it allows you to do things in parallel with multiple people/teams.

Sometimes the “in parallel” is when team members have slack, have them work on more proof of concept things that you think will be needed down the line. For the stuff I work on this can IMO be enjoyable, e.g. “you have some time, let's put a proof of concept together on using Codex + Agents to do some example work”. (Parallel is not quite the word for this, it is foreseeing future needs.) But it is similar in nature, I am having someone work on something that will ultimately change in ways in the future that will result in wasted effort, but that is OK, as the head start on trying to do vague things is well worth it.

What things to put on the list

This is the hardest part – you need someone who understands front to back what the software solution will look like, how it interacts with the world around it (users, databases, input/output, etc.) to be able to translate that vision into a tangible list of things to-do.

I am not even sure if I can articulate how to do this in a general enough manner to give useful advice. When I don't know things front to back though, I will tell you what, I often make mistakes going down paths that waste months of work (which I think is sometimes inevitable, as no one had the foresight to know it was a bad path until we got quite a ways down it).

I used to think we should do the extensive, months long, requirements gathering to avoid this. I know a few examples where I talked for months with the business owners, came up with a plan, and then later on realized it was based on some fundamental misunderstanding of the business. And the business team did not have enough understanding of the machine learning model to know it did not make sense.

I think mistakes like these are inevitable though, as requirements gathering is a two way street (it is not reasonable for any of the projects I work on to expect the people requesting things to put together a full, scoped out list). So just doing things and iterating is probably just as fast as waiting for a project to be fully scoped out.

Do them as fast as possible

So onto the second part of “have a list of things to-do and do them as fast as possible”. One of the things with “fast as possible” is that people will fill out their time. If you give someone two weeks to do something, most people will not do it faster, they will spend the full two weeks doing that task.

So you need someone technical saying “this should be done in two days”. One mistake I see teams make is listing out projects that will take several weeks to do. This is only OK for very senior people. The majority of dev tasks should be 1/2/3 days of work at max. So you need to take a big project and break it down into smaller components. This seems like micro-managing, but I do not know how else to do it and keep things on track. Being more specific is almost always worth my time as opposed to less specific.

Sometimes this even works at higher levels. For one of the projects that went well, initial estimates were 6+ months. Our new Senior VP of our group said “nope, needs to be 2-3 months”. And guess what? We did it (he spent money on some external contractors to do some work, but by god we did it). Sometimes “do them as fast as possible” is a negotiation at the higher levels of the org – you want something by end of Q3? Then we can do A and C, but B will have to wait until later, and the solution will be temporarily deployed as a desktop app instead of a fully served solution.

Again it is more work for our devs, but a shorter timeline with an even smaller MVP to help others totally makes sense.

AI will not magically save you

Putting this last part in, as I had a few conversations recently about large code conversion projects – teams wanted to know if they could just use AI to make short work of it. The answer is yes, it makes sense to use AI to help with these tasks, but they expected somewhat of a magic bullet. They each still needed to make a functional CICD framework to test isolated code changes for example. They still needed someone to sit down and say “Joe and Gary and Melinda will work on this project, and have XYZ deliverables in two months”. A legacy system that was built over decades is not a weekend project to just let the machine go brr and churn out a new codebase.

Some of them honestly are groups that just do not want to bite the bullet and do the work. I see projects that are mismanaged (for the criminal justice folks that follow me, on-prem CAD software deployments should not take 12 months). They take that long because the team is mismanaged, mostly people saying “I will do this high level thing in 3 months when I get time”, instead of being like “I will do part A in the next two days and part B in the three days following that”. Or doing things sequentially that should be done in parallel.

To date, genAI has only impacted the software engineering practices of my team at the margins (potentially writing code slightly faster, but probably not). We are currently using genAI in various products though for different end users. (We have deployed many supervised learning models going back years, just more recently have expanded into using genAI for different tasks though in products.)

I do not foresee genAI taking devs jobs in the near future, as there is basically infinite amounts of stuff to work on (everything when you look closely is inefficient in a myriad of ways). Using the genAI tools to write code though looks very much like project management, identifying smaller and more manageable tasks for the machine to work on, then testing those, and moving onto the next steps.

Using DuckDB WASM + Cloudflare R2 to host and query big data (for almost free)

The motivation here, prompted by a recent question Abigail Haddad had on LinkedIn:

For the machines, the context is hosting a dataset of 150 million rows (in another post Abigail stated it was around 72 gigs). And you want the public to be able to make ad-hoc queries on that data. Examples where you may want to do this are public dashboards (think a city's open data site that just puts all the data on R2 and has a front end).

This is the point where traditional SQL databases for websites probably don't make sense. Databases like Supabase Postgres or MySQL can hold that much data, but given the cost of cloud computing and what they are typically used for, it does not make much sense to load in 72 gigs and use them for data analysis type queries.

Hosting the data as static files though in an online bucket, like Cloudflare’s R2, and then querying the data makes more sense for that size. Here to query the data, I also use a WASM deployed DuckDB. What this means is I don’t really have to worry about a server at all – it should scale to however many people want to use the service (I am just serving up HTML). The client’s machine handles creating the query and displaying the resulting data via javascript, and Cloudflare basically just pushes data around.

If you want to see it in action, you can check out the github repo, or see the demo deployed on github pages to illustrate generating queries. To check out a query on my Cloudflare R2 bucket, you can run SELECT * FROM 'https://data-crimedecoder.com/books.parquet' LIMIT 10;:
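
If you want to poke at the bucket outside of the browser, the same file can also be queried from Python with the duckdb library. This is a minimal sketch, assuming the httpfs extension is available – the URL is the same public parquet file from the demo above.

import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # one time install for remote HTTP reads
con.execute("LOAD httpfs")

# Same query as the WASM demo, just run locally over HTTPS
res = con.execute("SELECT * FROM 'https://data-crimedecoder.com/books.parquet' LIMIT 10").df()
print(res)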

Cloudflare is nice here, since there are no egress charges (with big data you need to worry about that). You do get charged for different read/write operations, but the free tiers seem quite generous (I do not know quite how to map these queries to Class B operations in Cloudflare's parlance, but you get 10 million per month and all my tests only generated a few thousand).

Some notes on this set-up: on Cloudflare, to be able to use DuckDB WASM, I needed to expose the R2 bucket via a custom domain. Using the development url did not work (same issue as here). I also set my CORS Policy to:

[
  {
    "AllowedOrigins": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "HEAD"
    ],
    "AllowedHeaders": [
      "*"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]

While my Crime De-Coder site is PHP, all the good stuff happens client-side. So you can see some example demos of the GSU book prices data.

One of the annoying things about this though: with S3 you can partition the files and query multiple partitions at once. Here something like SELECT * FROM read_parquet('https://data-crimedecoder.com/parquet/Semester=*/*') LIMIT 10; does not work. You can union the partitions together manually, as shown below. So I am not sure if there is a way to set up R2 to work the same way as the S3 example (set up an FTP server? let me know in the comments!).
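
For what the manual union can look like, here is a minimal sketch in Python duckdb. The partition URLs are hypothetical placeholders – substitute whatever files actually exist in your bucket. read_parquet also accepts an explicit list of files, which saves writing out the UNION ALL by hand.

import duckdb

# Hypothetical partition files -- replace with the actual URLs in your bucket
parts = ["https://data-crimedecoder.com/parquet/Semester=2023/part0.parquet",
         "https://data-crimedecoder.com/parquet/Semester=2024/part0.parquet"]

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

# Option 1: pass an explicit list of files to read_parquet
file_list = ", ".join(f"'{p}'" for p in parts)
df = con.execute(f"SELECT * FROM read_parquet([{file_list}]) LIMIT 10").df()

# Option 2: spell out the UNION ALL manually
union_sql = " UNION ALL ".join(f"SELECT * FROM '{p}'" for p in parts)
df2 = con.execute(union_sql + " LIMIT 10").df()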

For pricing, for the scenario Abigail had of 72 gigs of data, we then have:

  • $10 per year for the domain
  • $0.015 per gig-month * 72 gigs * 12 months ≈ $13 for storage of the 72 gigs

So we have a total cost to run this of $23 per year. And it can scale to a crazy number of users and very large datasets out of the box. (My use case here is just $10 for the domain, as you get 10 gigs of storage for free.)

Since this can be deployed on a static site, there are free options (like github pages). So the page with the SQL query part is essentially free. (I was not sure if there is a way to double dip on the R2 custom domain, such as just putting the HTML in the bucket – it turns out yes, you can just put the HTML in the bucket and it will render like normal.)

While this example only shows generating a table, you can do whatever additional graphics client side. So you could make a normal looking dashboard with dropdowns, where those just execute various queries and fill in the graphs/tables.

Follow up on Reluctant Criminologists critique of build stuff

Jon Brauer and Jake Day recently wrote a response to my build stuff post, Should more criminologists build stuff, on their Reluctant Criminologists blog. Go ahead and follow Jon's and Jake's thoughtful work. They asked for comment before posting – I mainly wanted to post my response to be more specific about “how much” I think criminologists should build stuff. (Their initial draft said I should “abandon” theoretical research, which is not what I meant (and which they took out before publishing), but I could see how one could be confused by my statement “emphasis should be flipped” after saying near 0% of work now is building stuff.)

So here is my response to their post:

To start I did not say abandon theoretical research, I said “the emphasis be on doing”. So that is a relative argument, not an absolute. It is fine to do theoretical work, and it is fine to do ex-ante policy evaluations (which should be integrated into the process of building things – seeing if something works well enough to justify its expense is, I would say, a risky test). I do not have a bright line that I think building stuff is optimal for the field, but it should be much more common than it is now (which is close to 0%). To be specific on my personal opinion, I do think “build stuff” should be 50%+ of research criminologists’ time (relative to writing papers).

I am actually more concerned with the larping idea I gave. You have a large number of papers in criminology that justify their motivation not really as theoretical, but as practical to operations. And they are just not even close. So let’s go with the example of precise empirical distributions of burglaries at the neighborhood level. (It is an area I am familiar with, and there are many “I have a new model for crime in space” papers.) Pretend I did have a forecast, and I said there are going to be 10 burglaries in your neighborhood next month. What exactly are you supposed to do with that information? Forecasting the distribution does not intrinsically make it obvious how to prevent crime (nature does not owe us things we can manipulate). Most academic criminologists would say it is useful for police allocation, which is so vague as to be worthless.

You also do not need a fully specified causal chain to prevent crime. Most of the advancement in crime prevention looks more like CPTED applications than understanding “root causes” (which that phrase I think is a good example of an imprecise theory). I would much rather academics try to build specific CPTED applications than write another regression paper on crime and space (even if it is precise).

For the dark side part, in the counterfactual world in which academics don’t focus on direct applications, it does not mean those applications do not get built. They just get built by people who are not criminologists. It was actually the main reason I wrote the post – folks are building things now that should have more thoughtful input from academic criminologists.

For a specific example, different tech companies are demo'ing products with the ultimate goal of improving police officers' mental health. These include flagging if officers go to certain types of calls too often, or using a chatbot as a therapist. These are real things I would like criminologists like yourselves to be involved in developing, so you can ask “How do we know this is actually improving the mental health of officers?”. I vehemently disagree that more academic criminologists being involved will make development of these applications worse.

The final part I want to say is that building apps need not intrinsically be focused on any one area. I gave examples in policing that I am aware of, because that is my stronger area of expertise, but it can be anything. So let's go with personal risk assessments. Pre-trial, parole/probation risk assessments look very similar to what Burgess built 100 years ago at this point. So risk stratification is built on the idea that you need to triage resources (some people need more oversight, some less), especially for the parole scenario. Now it is certainly feasible someone comes up with a better technological solution such that risk stratification is not needed at all (say better sensors or security systems that obviate the need for the more intensive human oversight). Or a more effective regimen that applies to everyone, say better dynamic risk assessments, so people are funneled faster to more appropriate treatment regimes than just having a parole officer pop in from time to time.

I give this last example because I think it is an area where focusing on real applications I suspect will be more fruitful long term for theory development. So we have 100 years and thousands of papers on risk assessment, but really only very incremental progress in that area. I believe a stronger focus on actual application – thinking about dynamic measures to accomplish specific goals (like the treatment monitoring and assignment) – is likely to be more fruitful than trying to pontificate about some new theory of man that maybe later can be helpful.

We don’t have an atom to reduce observations down to (nor do we have an isolated root node in a causal diagram). We are not going to look hard enough and eventually find Laplace’s Demon. Focusing on a real life application, how people are going to use the information in practice, I think is a better way for everyone to frame their scientific pursuits. It is more likely a particular application changes how we think about the problem all together, and then we mold the way we measure to help accomplish that specific task. Einstein just started with the question “how do we measure how fast things travel when everything is moving”, a very specific question. He did not start out by saying “I want a theory of the universe”.

I am more bullish on real theoretical breakthroughs coming from more mundane and practical questions like “how do we tell if a treatment is working” or “how do we know if an officer is having a mental health crisis” than I am about someone coming up with a grander theory of whatever just from reading peer reviewed papers in their tower.

And here is Jon’s response to that:

Like you, we try to be optimistic, encouraging, and constructive in tone, though at times it requires serious effort to keep cynicism at bay. In general, if we had more Andrew Wheeler's thoughtfully building things and then evaluating them, then I agree this would be a good thing. Yet, if I don’t trust someone enough to meaningfully observe, record, and analyze the gauges, then I’m certainly not going to trust them to pilot – or to understand well enough to successfully build and improve upon the car/airplane/spaceship. Meanwhile, the normative analysis is that everything is significant/everything works – unless it’s stuff we collectively don’t like. In that context, the cynic in me thinks we are better off if we simply focus on teaching many (most?) social scientists to observe and analyze better – and may even do less harm despite wasted resources by letting them larp.

Jon and Jake do not have an estimate in their post on what they think the mix should be between building vs theorizing (they say pluralist in the post). I think the near 0 we do now is not good.

Much of this back and forth tends to mirror the current critique of advocacy in science. The Charles Tittle piece they cite, The arrogance of public sociology, could have been written yesterday.

Both the RC group and Tittle have what I would consider a perfect-is-the-enemy-of-the-good argument going on. People can do bad work, people can do good work. I want folks to go out and do good, meaningful work. I have met plenty of competent criminologists (and, on the flipside, have seen the level of competence of many software engineers) so I do not have RC's level of cynicism.

As an individual, I don’t think it makes much sense to worry about the perception of the field as a whole. I cannot control my fellow criminologists, I can only control what I personally do. Tittle in his critique thought public sociology would erode any legitimacy of the field. He maybe was right, but I posit producing mostly irrelevant work will put criminology on the same path.

Build Stuff

I have had this thought in my head for a while – criminology research to me is almost all boring. Most of the recent advancement in academia is focused on making science more rigorous – more open methods, more experiments, stronger quasi-experimental designs. These are all good things, but to me still do not fundamentally change the practical implementation of our work.

Criminology research is myopically focused on learning something – I think this should be flipped, and the emphasis be on doing something. We should be building things to improve the crime and justice system.

How criminology research typically goes

Here is a screenshot of the recent articles published in the Journal of Quantitative Criminology. I think this is a pretty good cross-section of high-quality, well-respected research in criminology.

Three of the four articles are clearly ex-ante evaluations of different (pretty normal) policies/behavior by police and their subsequent downstream effects on crime and safety. They are all good papers, and knowing how effective a particular policy is (like stop and frisk, or firearm seizures) is good! But they are the literal example where the term ivory tower comes from – these are things happening in the world, and academics passively observe and say how well they are working. None of the academics in those papers were directly involved in any boots on the ground application – they were normal operations the police agencies in question were doing on their own.

Imagine someone said “I want to improve the criminal justice system”, and then “to accomplish this, I am going to passively observe what other people do, and tell them if it is effective or not”. This is almost 100% of what academics in criminology do.

The article on illicit supply chains is another one that bothers me – it is sneaky in the respect that many academics would say “ooh that is interesting and should be helpful” given its novelty. I challenge anyone to give a concrete example of how the findings in the article can be directly useful in any law enforcement context. Not hypothetical, “can be useful in targeting someone for investigation”, but literal, “this specific group can do specific X to accomplish specific Y”. We have plenty of real problems with illicit supply chains – drug smuggling in and out of the US (I recommend the Contraband show on Amazon, who knew many manufacturers smuggle weed from the US out to the UK!). Fentanyl or methamphetamine production from base materials. Retail theft groups and selling online. Plenty of real problems.

Criminology articles tend to be littered with absurdly vague claims that they can help operations. They almost always cannot.

So we have articles that are passive evaluations of policies other people thought up. I agree this is good, but who exactly comes up with the new stuff to try out? We just have to wait around and hope other people have good ideas and take the time to try them out. And then we have theoretical articles larping as useful in practice (since other academics are the ones reviewing the papers, and no one says “erm, that is nice but makes no sense for practical day to day usage”).

Some may say this is the way science is supposed to work. My response to that is I don’t know dude, go and look at what folks are doing in the engineering or computer science or biology department. They seem to manage both theoretical and practical advancements at the same time just fine and dandy.

Well what have you built Andy?

It is a fair critique if you say “most of your work is boring Andy”. Most of my work is the same “see how a policy works from the ivory tower”, but a few are more “build stuff”. Examples of those include:

In the above examples, the one that I know has gotten the most traction is simple rules to identify crime spikes. I know because I have spent time demonstrating that work to various crime analysts across the country, and so many have told me “I use your Poisson Z-score Andy”. (A few have used the patrol area work as well, so I should be in the negative for carbon generation.)

Papers are not what matter though – papers are a distraction. The applications are what matter. The biggest waste currently in academic criminology work is peer reviewed papers. Our priorities as academics are totally backwards. We are evaluated on whether we get a paper published, we should be evaluated on whether we make the world a better place. Papers by themselves do not make the world a better place.

Instead of writing about things other people are doing and whether they work, we should spend more of our time trying to create things that improve the criminal justice system.

Some traditional academics may not agree with this – science is about formulating and testing hypotheses. This need not be in conflict with doing stuff. If you have a theory about human nature, what better way to prove the theory than building something to attempt to change things for the better according to your theory? If it works in real life to accomplish things people care about, guess what – other people will want to do it. You may even be able to sell it.

Examples of innovations I am excited about

Part of what prompted this was I was talking to a friend, and basically none of the things we were excited about have come from academic criminologists. I think a good exemplar of what I mean here is Anthony Tassone, the head of Truleo. To be clear, this is not a dig but a compliment, following some of Anthony’s posts on social media (LinkedIn, X), he is not a Rhodes Scholar. He is just some dude, building stuff for criminal justice agencies mostly using the recent advancements in LLMs.

For a few other examples of products I am excited about for how they can improve criminal justice (I have no affiliations with these beyond talking to people): Polis for evaluating body worn camera feeds. Dan Tatenko at CaseX is building an automated online crime reporting system that is much simpler to use. The folks at Carbyne (for 911 calls) are also doing some cool stuff. Matt White at Multitude Insights is building a SaaS app to better distribute BOLOs.

The folks at Polis (Brian Lande and Jon Wender) are the only two people in this list that have anything remotely to do with academic criminology. They each have PhDs (Brian in sociology and Jon in criminology). Although they were not tenure track professors, they are former/current police officers with PhDs. Dan at CaseX was a detective not that long ago. The folks at Carbyne I believe have tech backgrounds. Matt has a military background, but pursued his start up after doing an MBA.

The reason I bring up Anthony Tassone is because when we as criminologists say we are going to passively evaluate what other people are doing, we are saying “we will just let tech people like Anthony make decisions on what real practitioners of criminal justice pursue”. Again not a dig on Anthony – it is a good thing for people to build cool stuff and see if there is a market. My point is that if Anthony can do it, why not academic criminologists?

Rick Smith at Axon is another example. While Axon really got its dominant market position due to conducted energy devices and then body worn cameras (so hardware), quite a bit of the current innovation at Axon is software. And Rick did not have a background in hardware engineering either, he just had an idea and built it.

Having worked in professional software engineering since 2020, let me tell my fellow academics, you too can write software. It is more about having a good idea that actually impacts practice.

Where to next?

Since the day gig (working on fraud-waste-abuse in Medicaid claims) pays the bills, most of my build stuff is now focused on that. The technical skills to learn software engineering are currently not effectively taught in Criminal Justice PhD programs, but they could be. Writing a dissertation is way harder than learning to code.

While my python book has a major focus on data analysis, it is really the same skills to jump to more general software engineering. (I specifically wrote the book to cover more software engineering topics, like writing functions and managing environments, as most of the other python data science books lack that material.)

The skills gap is only part of the issue though. The second part is supporting work that pursues building stuff. It is really just norms in the current academe that stop this from occurring now. People value papers, and NIJ (at least historically) has mostly funded very boring incremental work.

I discussed start ups (people dreaming and building their own stuff) and other larger established orgs (like Axon). Academics are in a prime position to pursue their own start ups, and most Universities have some support for this (see Joel Caplan and Simsi for an example of that path). Especially for software applications, there are few barriers. It is more about time and effort spent pursuing that.

I think the more interesting path is to get more academic criminologists working directly with software companies. I will drop a specific example since I am pretty sure he will not be offended, everyone would be better off if Ian Adams worked directly for one of these companies (the companies, Ian’s take home pay, long term advancement in policing operations). Ian writes good papers – it would be better if Ian worked directly with the companies to make their tools better from the get go.

My friend I was discussing this with gave the example of Bell Labs. Software orgs could easily have professors take part time gigs with them directly, or just go work with them on sabbaticals. Axon should support something like that now.

While this post has been focused on software development, I think it could look similar for collaborating with criminal justice agencies directly. The economics will need to be slightly different (they do not have quite as much expendable capital to support academics as the private sector, where I think the ROI should be easily positive in the long run). But I think that would probably be much more effective than the current grant based approach. (Just pay a professor directly to do stuff, instead of asking NIJ to indirectly support evaluation of something the police department has already decided to put into operation.)

Scientific revolutions are not happening in journal articles. They are happening by people building stuff and accomplishing things in the real world with those innovations.


For a few responses to this post, Alex sent me this (saying my characterization of Larry as passively observing is not quite accurate), which is totally reasonable:

Nice post on building/ doing things and thanks for highlighting the paper with Larry. One error however, Larry was directly involved in the doing. He was the chief science officer for the London Met police and has designed their new stop and frisk policy (and targeting areas) based directly on our work. Our work was also highlighted by the Times London as effective crime policy and also by the Chief of the London Met Police as well who said it was one of the best policy relevant papers he’s ever seen. All police are now being trained on the new legislation on stop and search in procedurally just ways. You may not have known this background but it’s directly relevant to your post.

Larry Sherman (and David Weisburd), and their work on hot spots + direct experiments with police are really exemplars of “doing” vs “learning”. (David Kennedy and his work on focused deterrence is another good example.) In the mid 90s when Larry or David did experiments, they likely were directly involved in a way that I am suggesting – the departments are going and asking Larry “what should we do”.

My personal experience, trying to apply many of the lessons of David's and Larry's work (which was started around 30 years ago at this point), is not quite like that. It is more that police departments have already committed to doing something (like hotspots), and want help implementing the project, and maybe some grant helps fund the research. Which is hard and important work, but honestly it just looks like effective project management (and departments should just invest in researchers/project managers directly, the external funding model does not make sense long term). For a more on point example of what I mean by doing, see what Rob Guerette did as an embedded criminologist with Miami PD.

Part of the reason I wrote the post: if you think about the progression of policing, we have phases – August Vollmer for professionalization in the early 1900s. I think you could say folks like Larry and David (and Bill Bratton) brought about a new age of metrics to PDs in the 90s.

There are also technology changes that fundamentally impact PDs. Cars + 911 is one. The most recent one is a new type of oversight via body worn cameras. Folks who are leading this wave of professionalization changes are tech folks (like Rick Smith and Anthony Tassone). I think it is a mistake to just sit on the sidelines and see what these folks come up with – I want academic criminologists to be directly involved in the nitty gritty of the implementations of these systems and making them better.

A second response to this is that building stuff is hard, which I agree with and did not mean to imply it was as easy as writing papers (it is not). Here is Anthony Tassone's response on X:

I know this is hard. This is part of why I mentioned the Bell Labs path. Working directly for an already established company is much easier/safer than doing your own startup. Bootstrapping a startup is additionally much different than doing the VC go big or go home route – which academics on sabbaticals or as a side hustle are potentially in a good position to do.

Laura Huey did this path, and does not have nice things to say about it:

I have not talked to Laura specifically about this, but I suspect it is her experience running the Canadian Society of Evidence Based Policing. I would not suggest starting a non-profit either honestly. Even if you start a for-profit, there is no guarantee your current academic position will support it well.

Again no doubt building useful stuff is harder than writing papers. For a counter to these though, doing my bootstrapped consulting firm is definitely not as stressful as building a large company like Anthony. And working for a tech company directly was a good career move for me (although now I spend most of my day building stuff to limit fraud-waste-abuse in Medicaid claims, not improving policing).

My suggestion that the field should be more focused on building stuff was not because it was easier, it was because if you don’t there is a good chance you are mostly irrelevant.

AMA OLS vs Poisson regression

Crazy busy with Crime De-Coder and day job, so this blog has gone by the wayside for a bit. I am doing more python training for crime analysts, most recently in Austin.

If you want to get a flavor of the training, I have posted a few example videos on YouTube. Here is an example of going over Quarto markdown documents:

I do these custom for each agency. So I log into your system, and do actual queries with your RMS to illustrate. Coding is hard to get started with, so part of the idea behind the training is to figure out all of the hard stuff (installation, connecting to your RMS, setting up batch jobs), so it is easier for analysts to get started.


This post was a good question I recently received from Lars Lewenhagen at the Swedish police:

In my job I often do evaluations of place-based interventions. Sometimes there is a need to explore the dosage aspect of the intervention. If I want to fit a regression model for this the literature suggests doing a GLM regression predicting the crime counts in the after period with the dosage and crime counts in the before period as covariates. This looks right to me, but the results are often contradictory. Therefore, I contemplated making change in crime counts the dependent variable and doing simple linear regression. I have not seen anyone doing this, so it must be wrong, but why?

And my response was:

Short answer is OLS is probably fine.

Longer answer: to tell whether it makes more sense for OLS vs GLM, what matters is mostly the functional relationship of the dose response. So for example, say your doses were at 0, 1, 2, 3.

A linear model will look like, for example:

E[Y] = 10 + 3*x

Dose, Y
 0  , 10
 1  , 13
 2  , 16
 3  , 19

E[Y] is the “expected value of Y” (the parameter that is akin to the sample mean). For a Poisson model, it will look like:

log(E[Y]) = 2.2 + 0.3*x

Dose, Y
 0  ,  9.0
 1  , 12.2
 2  , 16.4
 3  , 22.2

So if you plot your mean crime at the different doses, and it is a straight line, then OLS is probably the right model. If you draw the same graph, but use a logged Y axis and it is a straight line, Poisson GLM probably makes more sense.

In practice it is very hard to tell the difference between these two curves in real life (you need to collect dose response data at many points). So just going with OLS is not per se good or bad, it is just a different model and for experiments with only a few dose locations it won’t make much of a difference to describe the experiment itself.

Where the model makes a bigger difference is extrapolating. Go with our above two models, and look at the prediction for dose=10 – there the predictions from the two models diverge by quite a lot.

I figured this would be a good one for the blog. Most of the academic material will talk about the marginal distribution of the variable being modeled (which is not quite right, as the conditional distribution is what matters). Really for a lot of examples I look at, linear models are fine, hence why I think the WDD statistic is reasonable (but not always).

For quasi-experiments it is the ratio between treated and control as well, but for a simpler dose-response scenario, you can just plot the means at binned locations of the doses and then see if it is a straight or curved line. In sample it often doesn’t even matter very much, it is all just fitting mean values. Where it is a bigger deal is extrapolation outside of the sample.
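
To make the extrapolation point concrete, here is a small sketch using the two toy models from the response above (the coefficients are the same made-up ones, nothing fit to real data). In sample the two sets of means are hard to tell apart, but at dose=10 the linear model predicts 40 while the Poisson model predicts around 181.

import numpy as np

def linear_mean(x):
    return 10 + 3*x               # E[Y] = 10 + 3*x

def poisson_mean(x):
    return np.exp(2.2 + 0.3*x)    # log(E[Y]) = 2.2 + 0.3*x

doses = np.array([0, 1, 2, 3])
print(linear_mean(doses))             # [10 13 16 19]
print(poisson_mean(doses).round(1))   # [ 9.  12.2 16.4 22.2]

# Plotting these means vs dose is the diagnostic above: roughly straight on a
# regular y axis suggests OLS, roughly straight on a logged y axis suggests Poisson

# Extrapolating well outside the observed doses is where the choice matters
print(linear_mean(10))                # 40
print(round(poisson_mean(10), 1))     # 181.3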

Using Esri + python: arcpy notes

I shared a series of posts this week using Esri + arcpy tools on my Crime De-Coder LinkedIn page. LinkedIn eventually removes the posts though, so I am putting those same tips here on the blog. Esri’s tools do not have great coverage online, so blogging is a way to get more coverage in those LLM tools long term.


A little arcpy tip: if you import a toolbox, it can be somewhat confusing what methods are available and what their names are. So for example, if importing some of the tools Chris Delaney has created for law enforcement data management, you can get the original methods available for arcpy, and then see the additional methods after importing the toolbox:

import arcpy
d1 = dir(arcpy) # original methods
arcpy.AddToolbox(r"C:\LawEnforcementDataManagement.atbx")
d2 = dir(arcpy) # updated methods available after AddToolbox
set(d2) - set(d1) # These are the new methods
# This prints out for me
# {'ConvertTimeField_Defaultatbx', 'toolbox_code', 'TransformCallData_Defaultatbx', 'Defaultatbx', 'TransformCrimeData_Defaultatbx'}
# To call the tool then
arcpy.TransformCrimeData_Defaultatbx(...)

Many of the Arc tools have the ability to copy python code. When I use Chris's tool it copy-pastes arcpy.Defaultatbx.TransformCrimeData, but if running from a standalone script outside of an Esri session (using the python environment that ArcPro installs), that isn't quite the right code to call the function.

You can check out Chris’s webinar that goes over the law enforcement data management tool, and how it fits into the different crime analysis solutions that Chris and company at Esri have built.


I like using conda for python environments on Windows machines, as it is easier to install some particular packages. So I mostly use:

conda create --name new_env python=3.11 pip
conda activate new_env
pip install -r requirements.txt

But for some libraries, like geopandas, I will have conda figure out the install. E.g.

conda create --name geo_env python=3.11 pip geopandas
conda activate geo_env
pip install -r requirements.txt

As these are particularly difficult to install, with many restrictions.

And if you are using ESRI tools, and you want to install a library, conda is already installed and you can clone that environment.

conda create --clone "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3" --name proclone
conda activate proclone
pip install -r requirements.txt

As you do not want to modify the original ESRI environment.


Using conda to run scheduled jobs in Windows is a little tricky. Here is an example of setting up a .bat file (which can be set up in Windows scheduler) to activate conda, set a new conda environment, and call a python script.

::: For log, showing date/time
echo:
echo --------------------------
echo %date% %time%
::: This sets the location of the script, as conda may change it
set "base=%cd%"
::: setting up conda in Windows, example Arc's conda activate
call "C:\Program Files\ArcGIS\Pro\bin\Python\Scripts\activate.bat"
::: activating a new environment
call conda activate proclone
::: running a python script
call cd %base%
call python auto_script.py
echo --------------------------
echo:

Then, when I set up the script in Windows scheduler, I often have the log file at that level. So in the task scheduler I will have the action as:

"script.bat" >> log.txt 2>&1

And have the option set so the script runs from the location of script.bat. This will append both the standard output and the error output to log.txt. So if something goes wrong, you can open log.txt and see what is up.


When working with arcpy, often you need to have tables inside of a geodatabase to use particular geoprocessing tools. Here is an example of taking an external csv file, and importing that file into a geodatabase as a table.

import arcpy
gdb = "./project/LEO_Tables.gdb"
tt = "TempTable"
arcpy.env.workspace = gdb

# Convert CSV into geodatabase
arcpy.TableToTable_conversion("YourData.csv",gdb,tt)
#arcpy.ListTables() # should show that new table

# convert time fields into text, useful for law enforcement management tools
time_fields = ['rep_date','begin','end']
for t in time_fields:
    new_field = f"{t}2"
    arcpy.management.AddField(tt,new_field,"TEXT")
    arcpy.management.CalculateField(tt,new_field,f"!{t}!.strftime('%Y/%m/%d %H:%M')", "PYTHON3")

# This will show the new fields
#fn = [f.name for f in arcpy.ListFields(tt)]

When you create a new project, it automatically creates a geodatabase file to go along with that project. If you just want a standalone geodatabase though, you can use something like this in your python script:

import arcpy
import os

gdb = "./project/LEO_Tables.gdb"

if not os.path.exists(gdb):
    loc, db = os.path.split(gdb)
    arcpy.management.CreateFileGDB(loc,db)

So if the geodatabase does not exist, it creates it. If it does exist though, it will not worry about creating a new one.


One of the examples for automation is taking a basemap, updating some of the elements, and then exporting that map to an image or PDF. This sample code, using Dallas data, shows how to set up a project to do this. And here is the original map:

Because ArcGIS has so many different elements, the arcpy module tends to be quite difficult to navigate. Basically I try to separate out data processing (which often takes inputs and outputs them into a geodatabase) vs visual things on a map. So to do this project, you have step 1, import data into a geodatabase, and step 2, update the map elements – here the legend, title, copying symbology, etc.

You can go to the github project to download all of the data (including the aprx project file, as well as the geodatabase file). But here is the code to review.

import arcpy
import pandas as pd
from arcgis.features import GeoAccessor, GeoSeriesAccessor
import os

# Set environment to a particular project
gdb = "DallasDB.gdb"
ct = "TempCrimes"
ol = "ExampleCrimes"
nc = "New Crimes"
arcpy.env.workspace = gdb
aprx = arcpy.mp.ArcGISProject("DallasExample.aprx")
dallas_map = aprx.listMaps('DallasMap')[0]
temp_layer = f"{gdb}/{ct}"

# Load in data, set as a spatial dataframe
df = pd.read_csv('DallasSample.csv') # for a real project, will prob query your RMS
df = df[['incidentnum','lon','lat']]
sdf = pd.DataFrame.spatial.from_xy(df,'lon','lat', sr=4326)

# Add the feature class to the map, note this does not like missing data
sdf.spatial.to_featureclass(location=temp_layer)
dallas_map.addDataFromPath(os.path.abspath(temp_layer)) # it wants the abs path for this

# Get the layers, copy symbology from old to new
new_layer = dallas_map.listLayers(ct)[0]
old_layer = dallas_map.listLayers(ol)[0]
old_layer.visible = False
new_layer.symbology = old_layer.symbology
new_layer.name = nc

# Add into the legend, moving to top
layout = aprx.listLayouts("DallasLayout")[0]
leg = layout.listElements("LEGEND_ELEMENT")[0]
item_di = {f.name:f for f in leg.items}
leg.moveItem(item_di['Dallas PD Divisions'], item_di[nc], move_position='BEFORE')

# Update title in layout "TitleText"
txt = layout.listElements("TEXT_ELEMENT")
txt_di = {f.name:f for f in txt}
txt_di['TitleText'].text = "New Title"
# If you need to make larger, can do
#txt_di['TitleText'].elementWidth = 2.0

# Export to high res PNG file
layout.exportToPNG("DallasUpdate.png",resolution=500)

# Cleaning up, to delete the file in geodatabase, need to remove from map
dallas_map.removeLayer(new_layer)
arcpy.management.Delete(ct)

And here is the updated map:

Some notes on ESRI server APIs

Just a few years ago, most cities' open data sites were dominated by Socrata services. More recently though, cities have turned to ArcGIS servers to disseminate not only GIS data, but also just plain tabular data. This post is to collate my notes on querying ESRI's APIs for these services. They are quite fast, have very generous return limits, and have the ability to do filtering/aggregation.

So first let's start with Raleigh's Open Data site, specifically the Police Incidents. Sometimes for data analysis you just want a point-in-time dataset, and can download 100% of the data (which you can do here, see the Download button in the below screenshot). But what I am going to show here is how to format queries to generate up to date information. This is useful in web-applications, like dashboards.

So first, go down to the Blue button in the below screen that says I want to use this:

Once you click that, you will see a screen that lists several different options, click to expand the View API Resources, and then click the link open in API explorer:

To save a few steps, here is the original link and the API link side by side, you can see you just need to change explore to api in the url:

https://data-ral.opendata.arcgis.com/datasets/ral::daily-raleigh-police-incidents/explore
https://data-ral.opendata.arcgis.com/datasets/ral::daily-raleigh-police-incidents/api

Now on this page, it has a form to be able to fill in a query, but first check out the Query URL string on the right:

I am going to go into how to modify that URL in a bit to return different slices of data. But first check out the link https://services.arcgis.com/v400IkDOw1ad7Yad/ArcGIS/rest/services

This simpler view I often find easier for seeing all the available data than the open data websites with their extra fluff. You can often tell the different data sources right from the name (and often cities have more things available than they show on their open data site). But let's go to the Police Incidents Feature Server page, the link is https://services.arcgis.com/v400IkDOw1ad7Yad/ArcGIS/rest/services/Daily_Police_Incidents/FeatureServer/0:

This gives you some meta-data (such as the fields and projection). Scroll down to the bottom of the page, and click the Query button, it will then take you to https://services.arcgis.com/v400IkDOw1ad7Yad/ArcGIS/rest/services/Daily_Police_Incidents/FeatureServer/0/query:

I find this tool easier for formatting queries than the Open Data site. Here I put in the Where field 1=1, set the Out Fields to *, and the Result record count to 3. I then hit the Query (GET) button:

This gives an annoyingly long url. And here are the resulting images

So although this returns a very long url, most of the parameters in the url are empty. So you could have a more minimal url of https://services.arcgis.com/v400IkDOw1ad7Yad/ArcGIS/rest/services/Daily_Police_Incidents/FeatureServer/0/query?where=1%3D1&outFields=*&resultRecordCount=3&f=json. (There I changed the format to json as well.)

In python, it is easier to work with the json or geojson output. So here I show how to query the data, and read it into a geopandas dataframe.

from io import StringIO
import geopandas as gpd
import requests

base = "https://services.arcgis.com/v400IkDOw1ad7Yad/ArcGIS/rest/services/Daily_Police_Incidents/FeatureServer/0/query"
params = {"where": "1=1",
          "outFields": "*",
          "resultRecordCount": "3",
          "f": "geojson"}
res = requests.get(base,params)
gdf = gpd.read_file(StringIO(res.text)) # note I do not use res.json()

Now, the ESRI servers will not return a dataset that has 1,000,000 rows, it limits the outputs. I have a gnarly function I have built over the years to do the pagination, fall back to json if geojson is not available, etc. Left otherwise uncommented.

from datetime import datetime
from io import StringIO
import geopandas as gpd
import numpy as np
import pandas as pd
import requests
import time
from urllib.parse import quote

def query_esri(base='https://services.arcgis.com/v400IkDOw1ad7Yad/arcgis/rest/services/Police_Incidents/FeatureServer/0/query',
               params={'outFields':"*",'where':"1=1"},
               verbose=False,
               limitSize=None,
               gpd_query=False,
               sleep=1):
    if verbose:
        print(f'Starting Queries @ {datetime.now()}')
    req = requests
    p2 = params.copy()
    # try geojson first, if fails use normal json
    if 'f' in p2:
        p2_orig_f = p2['f']
    else:
        p2_orig_f = 'geojson'
    p2['f'] = 'geojson'
    fin_url = base + "?"
    amp = ""
    for key,val in p2.items():
        fin_url += amp + key + "=" + quote(val)
        amp = "&"
    # First, getting the total count
    count_url = fin_url + "&returnCountOnly=true"
    if verbose:
        print(count_url)
    response_count = req.get(count_url)
    # If error, try using json instead of geojson
    if 'error' in response_count.json():
        if verbose:
            print('geojson query failed, going to json')
        p2['f'] = 'json'
        fin_url = fin_url.replace('geojson','json')
        count_url = fin_url + "&returnCountOnly=true"
        response_count2 = req.get(count_url)
        count_n = response_count2.json()['count']
    else:
        try:
            count_n = response_count.json()["properties"]["count"]
        except:
            count_n = response_count.json()['count']
    if verbose:
        print(f'Total count to query is {count_n}')
    # Getting initial query
    if p2_orig_f != 'geojson':
        fin_url = fin_url.replace('geojson',p2_orig_f)
    dat_li = []
    if limitSize:
        fin_url_limit = fin_url + f"&resultRecordCount={limitSize}"
    else:
        fin_url_limit = fin_url
    if gpd_query:
        full_response = gpd.read_file(fin_url_limit)
        dat = full_response
    else:
        full_response = req.get(fin_url_limit)
        dat = gpd.read_file(StringIO(full_response.text))
    # If too big, getting subsequent chunks
    chunk = dat.shape[0]
    if chunk == count_n:
        d2 = dat
    else:
        if verbose:
            print(f'The max chunk size is {chunk:,}, total rows are {count_n:,}')
            print(f'Need to do {np.ceil(count_n/chunk):,.0f} total queries')
        offset = chunk
        dat_li = [dat]
        remaining = count_n - chunk
        while remaining > 0:
            if verbose:
                print(f'Remaining {remaining}, Offset {offset}')
            offset_val = f"&cacheHint=true&resultOffset={offset}&resultRecordCount={chunk}"
            off_url = fin_url + offset_val
            if gpd_query:
                part_response = gpd.read_file(off_url)
                dat_li.append(part_response.copy())
            else:
                part_response = req.get(off_url)
                dat_li.append(gpd.read_file(StringIO(part_response.text)))
            offset += chunk
            remaining -= chunk
            time.sleep(sleep)
        d2 = pd.concat(dat_li,ignore_index=True)
    if verbose:
        print(f'Finished queries @ {datetime.now()}')
    # checking to make sure numbers are correct
    if d2.shape[0] != count_n:
        print(f'Warning! Total count {count_n} is different than queried count {d2.shape[0]}')
    # if geojson, just return
    if p2['f'] == 'geojson':
        return d2
    # if json, can drop geometry column
    elif p2['f'] == 'json':
        if 'geometry' in list(d2):
            return d2.drop(columns='geometry')
        else:
            return d2

And so, to get the entire dataset of crime data in Raleigh, it is then df = query_esri(verbose=True). It is pretty large, so I show here limiting the query.

params = {'where': "reported_date >= CAST('1/1/2025' AS DATE)", 
          'outFields': '*'}
df = query_esri(base=base,params=params,verbose=True)

Here this shows doing a datetime comparison, by casting the input to a date. Sometimes you have to do the opposite, cast one of the text fields to dates or extract out values from a date field represented as text.

Example Queries

So I showed above how you can do a WHERE clause in the queries. You can do other stuff as well, such as getting aggregate counts. For example, here is a query that shows how to get aggregate statistics.

If you click the link, it will go to the query form ESRI webpage. And that form shows how to enter in the output statistics fields.

And this produces counts of the total crimes in the database.

Here are a few additional examples I have saved in my notes:

Do not use the query_esri function above for aggregate counts, just form the params and pass them into requests directly. The query_esri function is meant to return large sets of individual rows, and so can overwrite the params in unexpected ways.
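
Here is a minimal sketch of what forming those params directly can look like. The outStatistics parameter is part of the ArcGIS REST API query endpoint, but the statistic chosen and the OBJECTID field below are just placeholders – adjust them to the layer you are actually querying.

import json
import requests

base = ("https://services.arcgis.com/v400IkDOw1ad7Yad/ArcGIS/rest/services/"
        "Daily_Police_Incidents/FeatureServer/0/query")

# outStatistics is a JSON encoded list of the statistics you want returned
params = {"where": "1=1",
          "outStatistics": json.dumps([{"statisticType": "count",
                                        "onStatisticField": "OBJECTID",
                                        "outStatisticFieldName": "total_crimes"}]),
          "f": "json"}

res = requests.get(base, params=params)
print(res.json())  # returns a single feature with the aggregate count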

Check out my Crime De-Coder LinkedIn page this week for other examples of using python + ESRI. This is more for public data, but those will be examples of using arcpy in different production scenarios. Later this week I will also post an updated blog here, for the LLMs to consume.

LinkedIn is the best social media site

The end goals I want for a social media site are:

  • promote my work
  • see other peoples work

Social media for other people may have other uses. I do comment and have minor interactions on the social media sites, but I do not use them primarily for that. So my context is more business oriented (I do not have Facebook, and have not considered it). I participate some on Reddit as well, but only sparingly.

LinkedIn is the best for both relative to X and BlueSky currently. So I encourage folks with my same interests to migrate to LinkedIn.

LinkedIn

So I started Crime De-Coder around 2 years ago. I first created a website, and then started a LinkedIn page.

When I first created the business page, I invited most of my criminal justice contacts to follow the page. I had maybe 500 followers just based on that first wave of invites. At first I posted once or twice a week, and growth was very steady, reaching over 1500 followers in maybe just a month or two.

Now, LinkedIn has a reputation for more spammy lifecoach self promotion (for lack of a better description). I intentionally try to post somewhat technical material, but keep it brief and understandable. It is mostly things I am working on that I think will be of interest to crime analysts or the general academic community. Here is one of my recent posts on structured outputs:

The current follower count on LinkedIn for my business page (which in retrospect may have been a mistake, as I think they promote business pages less than personal pages) is 3230, and I have fairly consistent growth of a few new followers per day.

I first started posting once a week, and with additional growth expanded to once every other day and at one point once a day. I have cut back recently (mostly just due to time). I did get more engagement, around 1000+ views per day when I was posting every day.

Probably the most important part though of advertising Crime De-Coder is the types of views I am getting. My followers are not just academic colleagues I was previously friends with; there is a decent number outside my first degree network of police officers and other non-profit related folks. I have landed several contracts where I know those individuals reached out to me based on my LinkedIn posting. It could be higher, as my personal Crime De-Coder website ranks very poorly on Bing search, but my LinkedIn posts come up fairly high.

When I was first on Twitter I did have a few academic collaborations that I am not sure would have happened without it (a paper with Manne Gerell, and a paper with Gio Circo, although I had met Gio in real life before that). I do not remember getting any actual consulting work though.

I mentioned LinkedIn is not only better for advertising my work, but also for consuming other material. I did a quick experiment: I just opened the home page and scrolled the first 3 non-advertisement posts on LinkedIn, X, and BlueSky. For LinkedIn:

This first one is likely a person I do not want anything to do with, but I agree with their comment. Whenever I use Service Now at my day job I want to rage quit (just send a Teams chat or email and be done with it, LLMs can do smarter routing these days). The next two are people I am directly connected with. Some snark by Nick Selby (I can understand the sentiment, albeit disagree with it, though I will not bother to comment). And something posted by Mindy Duong I likely would be interested in:

Then another advert, and then a post by Chief Patterson of Raleigh, whom I am not directly connected with, but was liked by Tamara Herold and Jamie Vaske (whom I am connected with).

So the adverts are annoying, but the suggested posts (the feeds are weird now, they are not chronological) are not bad. I would prefer if LinkedIn had separate “general” and “my friends” sections, but overall I am happier with the content I see on LinkedIn than I am with the other sites.

X & BlueSky

I first created a personal account on Twitter (as it was then) in 2018. Nadine Connell suggested it, and it was nice then. When I first joined, I think it was Cory Haberman who tweeted to follow my work, and I had a few hundred followers that first day. Then over the next two years, just posting blog posts and papers for the most part, I grew to over 1500 followers IIRC. I also consumed quite a bit of content from criminal justice colleagues. It was much more academic focused, but it was a very good source of recent research and CJ relevant news and content.

I then eventually deleted the Twitter account, due to a colleague being upset I liked a tweet. To be clear, the colleague was upset but it wasn’t a very big deal, I just did not want to deal with it.

I started a Crime De-Coder X account last year. I made an account to watch the Trump interview, and just decided to roll with it. I tried really hard to make X work – I posted daily, the same stuff I had been sharing on LinkedIn, just shorter form. After 4 months, I have 139 followers (again, when I joined Twitter in 2018 I had more than that on day 1). And some of those followers are porn accounts or bots. The majority of my posts get <=1 like and 0 reposts. It just hasn’t resulted in getting my work out there the same way it did in 2018, or the way LinkedIn does now.

So in terms of sharing work, the more recent X has been a bust. In terms of viewing other work, my X feed is dominated by short form video content (a mimic of TikTok) I don’t really care about. This is after extensively blocking/muting/saying I don’t like a lot of content. I promise I tried really hard to make X work.

So when I open up the Twitter home feed, it is two videos by Musk:

Then a thread by Per-Olof (whom I follow), and then another short video, a Death App joke:

So I thought this was satire, but clicking through that fellow’s posts, I think he may actually be involved in promoting that app. I don’t know, but I don’t want any part of it.

I have not been on BlueSky as long, but given how easy it was to get started on Twitter and X, I am not going to worry about posting so much. I have 43 followers, and posts similar to those on X have gotten basically zero interaction for the most part. The content feed is different than X, but is still not something I care that much about.

We have Jeff Asher and his football takes:

I am connected with Jeff on LinkedIn, where he only posts his technical material. So if you want to hear Jeff’s takes on football and UT-Austin stuff, go ahead and follow him on BlueSky. Then we have a promotional post by a psychologist (I likely would be interested in following this person’s work, though this particular post is not very interesting). And a not-funny Onion-like post?

Then Gavin Hales, whom I follow, and typically shares good content. And another post I leave with no comment.

My BlueSky feed is currently dominated by folks in the UK. It could be good, but it just does not have the uptake to make it worth it the way Twitter was in 2018. Though given my different goals now (advertising my consulting business), it may be the case that Twitter in 2018 would not be good for that either.

So for folks who subscribe to this blog, I highly suggest giving LinkedIn a try for your social media consumption and sharing.

How much do students pay for textbooks at GSU?

Given I am a big proponent of open data, replicable scientific results, and open access publishing, I struck up a friendship with Scott Jacques at Georgia State University. One of the projects we pursued was pretty simple, but could potentially save students a ton of money. If you have checked out your university’s online library system recently, you may have noticed they have digital books (mostly from academic presses) that you can just read. No limits like the local library, they are just available to all students.

So the idea Scott had was to identify books students are paying for, and then see if the library can negotiate with the publisher to make them available to all students. This shifts the cost from the student to the university, but the licensing fees for the books are not that large (think less than $1000). This can save money especially for classes with many students: say a $30 book with 100 students, that is $3000 students are ponying up in toto.

To do this we would need course enrollments and the books they are having students buy. Of course, this is data that does exist, but I knew going in that no one was just going to nicely hand us a spreadsheet of data. So I set about scraping the data; you can see that work on Github if you care to.

The github repo in the data folder has fall 2024 and spring 2025 Excel spreadsheets if you want to see the data. I also have a filterable dashboard on my crime de-coder site.

You can filter for specific colleges, look up individual books, etc. (This is a preliminary dashboard that has a few kinks, if you get too sick of the filtering acting wonky I would suggest just downloading the Excel spreadsheets.)

One of the aspects of doing this analysis though is that the types of academic publishers Scott and I set out to identify are pretty small fish. The largest happen to be academic textbook publishers (like Pearson and McGraw Hill). The biggest, coming in at over $300,000 of student spending in a year, is a Pearson text on algebra.

You may wonder why so many students are buying an algebra book. It is assigned across the Pre-calculus courses. GSU is a predominantly low income serving institution, with the majority of students on Pell grants. Those students at least will get their textbooks reimbursed via the Pell grants (at least before the grant money runs out).

Being a former professor, I found these course bundles in my area (criminal justice) to be comically poor quality. I concede the math ones could be higher quality (I have not purchased this one specifically), but there are two solutions here. One, universities should directly contract with Pearson to buy licensing for the materials at a discount. The bookstore prices are often slightly higher than just buying from other sources (Pearson or Amazon) directly. (Students on Pell Grants need to buy from the bookstore though to be reimbursed.)

A second option is simply to pay someone to create open access materials to swap out. Universities often have an option for taking a sabbatical to write a textbook. I am pretty sure GSU could throw 30k at an adjunct and they would write material of just as high (if not higher) quality. For basic material like that, the current LLM tools could help speed the process by quite a bit.

Professors use these types of textbooks because they are convenient, so if a lower cost option were available that met the same needs, I am pretty sure you could convince the math department to have those materials as the standard. If we go to page two of the dashboard though, we see some new types of books pop up:

You may wonder, what is Conley Smith Publishing? It happens to be an idiosyncratic self-publishing platform. Look, I have a self-published book as well, but having 800 business students a semester buy your self-published $100 “using Excel” book, that is just a racket. And when I give that example to friends, almost everyone says they experienced that racket at some point in their college career.

There is no easy solution to the latter case of professors ripping off their students. It is not illegal as far as I’m aware. Just guessing at the margins, that business prof is maybe making a $30k bonus a semester by forcing their students to buy their textbook. Unlike the academic textbook scenario, this individual will not swap out their materials, even if the alternative materials are higher quality.

Solving the issue will take senior administration at universities caring that professors are gouging their (mostly low income) students and putting a stop to it.

This is not a problem unique to GSU, it is a problem at all universities. Universities could aim to make course materials low/no-cost, and use that as advertisement. This should be particularly effective advertisement for low income serving universities.

If you are interested in a similar analysis for your own university, feel free to get in touch with either myself or Scott. We would like to expand our cost saving projects beyond GSU.

The story of my dissertation

My dissertation is freely available to read on my website (Wheeler, 2015). I still open up the hardcover copy I purchased every now and then. No one cites it, because no one reads dissertations, but it is easily the work I am the most proud of.

For most of the articles I write, there is some motivating story behind the work you would never know about just from reading the words. I think this is important, as the story often is tied to some more fundamental problem, and solving specific problems is the main way we make progress in science. The stifling way that academics currently write peer reviewed papers doesn’t allow that extra narrative in.

For example, my first article (and what ended up being my masters thesis, at Albany at that time you could go directly into the PhD from undergrad and get your masters on the way) was about the journey to crime after people move (Wheeler, 2012). The story behind that paper was that, while I was working at the Finn Institute, Syracuse PD was interested in targeted enforcement of chronic offenders, many of whom drive around without licenses. I thought, why not look at the journey to crime to see where they are likely driving. When I did that analysis, I noticed a few hundred chronic offenders had something like a 5-fold number of home addresses in the sample. (If you are still wanting to know where they drive, they drive everywhere, chronic offenders have very wide spatial footprints.)

Part of the motivation behind that paper was if people move all the time, how can their home matter? They don’t really have a home. This is a good segue into the motivation of the dissertation.

More of my academic reading at that point had been on macro and neighborhood influences on crime. (Forgive me, as I am likely to get some of the timing wrong in my memory, but this writing is as best as I remember it.) I had a class with Colin Loftin that I do not remember the name of, but it discussed things like the southern culture of violence, Rob Sampson’s work on neighborhoods and crime, and likely other macro work I cannot remember. Sampson’s work in Chicago made the biggest impression on me. I have a scanned copy of Shaw & McKay’s Juvenile Delinquency (2nd edition). I also took a spatial statistics class with Glenn Deane in the sociology department, and the major focus of the course was on areal units.

When thinking about the dissertation topic, the only advice I remember receiving was about scope. Shawn Bushway at one point told me about a stapler thesis (three independent papers bundled into a single dissertation). I just wanted something big, something important. I intentionally sought out to try to answer some more fundamental question.

So the first inkling I had was “how can neighborhoods matter if people don’t consistently live in the same neighborhood?” The second was that in my work at the Finn Institute with police departments, hot spots were the only thing any police department cared about. It is not uncommon even now for an academic to fit a spatial model of crime and demographics at the neighborhood level, and have a throwaway paragraph in the discussion about how it would help police better allocate resources. It is comically absurd – you can just count up crimes at addresses or street segments and rank them, and that will be a much more accurate and precise system (no demographics needed).

So I wanted to do work on micro level units of analysis. But I had Glenn and Colin on my dissertation committee – people very interested in macro and some neighborhood level processes. So I would need to justify looking at small units of analysis. Reading the literature, Weisburd and Sherman did not, to me, have clearly articulated reasons to be interested in micro places beyond just utility for police. Sherman had the paper counting up crimes at addresses (Sherman et al., 1989), and none of Weisburd’s work had, to me, any clear causal reasoning to look at micro places to explain crime.

To be clear, wanting to look at small units as the only guidepost in choosing a topic is a terrible place to start. You should start from a more specific, articulable problem you wish to solve (if others pursuing PhDs are reading). But I did not have that level of clarity in my thinking at the time.

So I set out to articulate a reason to be interested in micro level areas that I thought would satisfy Glenn and Colin. I started out thinking about doing a simulation study, similar to what Stan Openshaw (1984) did, motivated by Robinson’s (1950) ecological fallacy. While doing that I realized there was no point in doing the simulation, you could figure it all out in closed form (as have others before me). So I proved that random spatial aggregation would not result in the ecological fallacy, but aggregating nearby spatial areas would, assuming there is spatial covariance between nearby areas. I thought at the time it was a novel proof – it was not (footnote 1 on page 9 lists all the things I read after this). Even now the Wikipedia page on the ecological fallacy has an unsourced overview of the issue, that cross-spatial correlations make the micro and macro equations not equal.
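To give a rough sketch of the logic in simplified notation (my shorthand here, not the dissertation’s exact derivation): start with a micro level model, average it up over areal units g, and look at what the macro regression recovers.

\[
y_i = \beta x_i + \epsilon_i
\;\Rightarrow\;
\bar{y}_g = \beta \bar{x}_g + \bar{\epsilon}_g,
\qquad
\hat{\beta}_{\text{macro}} \rightarrow \beta + \frac{\operatorname{Cov}(\bar{x}_g, \bar{\epsilon}_g)}{\operatorname{Var}(\bar{x}_g)}
\]

Under random aggregation the covariance term is zero in expectation, so the macro estimate matches the micro one. When nearby, spatially correlated units are grouped together, that covariance is generally non-zero and the micro and macro equations diverge.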

This in and of itself is not interesting, but the process did clearly articulate to me why you want to look at micro units. The example I like to give is as follows – imagine you have a bar you think causes crime. The bar can cause crime inside the bar, as well as diffusing risk into the nearby area. Think people getting in fights in the bar, vs people being robbed walking away from a night of drinking. If you aggregate to large units of analysis, you cannot distinguish between “inside bar crime” vs “outside bar crime”. So that is a clear causal reason for when you want to look at particular units of analysis – the ability to estimate diffusion/displacement effects is highly dependent on the spatial unit of analysis. If you have an intervention like “make the bar hire better security” (a la John Eck’s work), it should likely not have any impact outside the bar, only inside the bar. So local vs diffusion effects are not entirely academic, they can have specific real world implications.

This logic does not always explicitly favor smaller spatial units of analysis though. Another example I liked to give: say you are evaluating a city wide gun buy back. You could look at more micro areas than the entire city, e.g. see if crime decreased in neighborhood A and increased in neighborhood B, but that likely does not invalidate the macro city wide analysis, which is just an aggregate estimate over the entire city, and which in some cases is preferable.

Glenn Deane at some point told me that I am a reductionist, which was the first time I heard that word, but it did encapsulate my thinking. You could always go smaller, there is no atom to stop at. But often it just doesn’t matter – you could examine the differences in crime between the front stoop and the back porch, but there are not likely meaningful causal reasons to do so. This logic works for temporal aggregation and aggregating different crime types as well.

I would need to reread Great American City, but I did not take this to be necessarily contradictory to Sampson’s work on neighborhood processes. Rob came to SUNY Albany to give a talk at the sociology department (I don’t remember the year). Glenn invited me to whatever they were doing after the talk, and being a hillbilly I said I needed to go back to work at DCJS, I was on my lunch break. (To be clear, no one at DCJS would have cared.) I am sure I would not have been able to articulate anything of importance to him, but in retrospect I do wish I had taken that opportunity.

So with the knowledge of how aggregation bias occurs in hand, I had formulated a few different empirical research projects. One was the bars and crime idea I have already given an example of. I had a few interesting findings, one of which is that the diffusion effects are larger than the local effects. I also estimated the bias of bars selecting into high crime areas via a non-equivalent dependent variable design – the only time I have used a DAG in any of my work.

I gave a job talk at Florida State before the dissertation was finished. I had this idea in the hotel room the night before my talk. It was a terrible idea to add it to my talk, and I did not sufficiently prepare what I was going to say, so it came out like a jumbled mess. I am not sure whether I would want to remember or forget that series of events (which included me asking Ted Chiricos at dinner whether you can fish in the Gulf of Mexico; I feel I am OK in one-on-one chats, but at group dinners I am more awkward than you can possibly imagine). It also included nice discussions though. Dan Mears asked me a question about emergent macro phenomena that I did not have a good answer to at the time, but now I would say simple causal processes having emergent phenomena is a reason to look at the micro, not the macro. Eric Stewart asked me if there is any reason to look at neighborhoods and I said no at the time, but I should have given my gun buy back analogy.

The second empirical study I took from broken windows theory (Kelling & Wilson, 1982). For the majority of social science theories, some spatial diffusion is to be expected. Broken windows theory though has a very clear spatial hypothesis – you need to see disorder for it to impact your behavior. So you do not expect spatial diffusion to occur beyond someone’s line of sight. To measure disorder, I used 311 calls (I had this idea before I read Dan O’Brien’s work, see my prospectus, but Dan published his work on the topic shortly thereafter, O’Brien et al., 2015).

I confirmed this to be the case, conditional on controlling for neighborhood effects. I also discuss how if the underlying process is smooth, using discrete neighborhood boundaries can result in negative spatial autocorrelation, which I show some evidence of as well.

This suggests that a smooth measure of neighborhoods, like Hipp’s idea of egohoods (Hipp et al., 2013), is probably more reasonable than discrete neighborhood boundaries (which are often quite arbitrary).

While I ended up publishing those two empirical applications (Wheeler, 2018; 2019), which was hard, I was too defeated to even worry about posting a more specific paper on the aggregation idea. (I think I submitted this paper to Criminology, but it was not well received.) I was partially burned out from the bars and crime paper, which went through at least one R&R at Criminology and was still rejected. And then I went through four rejections for the 311 paper. I had at that point multiple other papers that took years to publish. It is a slog, and degrading, to be rejected so much.

But that is really my only substantive contribution to theoretical criminology in any guise. After the dissertation, I just focused on either policy work or engineering/method applications, which are much easier to publish.

References