LinkedIn is the best social media site

The end goals I want for a social media site are:

  • promote my work
  • see other people's work

Social media may serve other purposes for other people. I do comment and have minor interactions on social media sites, but I do not use them primarily for that. So my context is more business oriented (I do not have Facebook, and have never considered it). I participate some on Reddit as well, but pretty sparingly.

LinkedIn is currently the best for both goals relative to X and BlueSky. So I encourage folks with the same interests as mine to migrate to LinkedIn.

LinkedIn

So I started Crime De-Coder around 2 years ago. I created a website first, and then started a LinkedIn page.

When I first created the business page, I invited most of my criminal justice contacts to follow the page. I had maybe 500 followers just based on that first wave of invites. At first I posted once or twice a week, and it was very steady growth, and grew to over 1500 followers in maybe just a month or two.

Now, LinkedIn has a reputation for more spammy life-coach self-promotion (for lack of a better description). I intentionally try to post somewhat technical material, but keep it brief and understandable. It is mostly things I am working on that I think will be of interest to crime analysts or the general academic community. Here is one of my recent posts on structured outputs:

My current follower count on LinkedIn for the business page is 3,230, with fairly consistent growth of a few new followers per day. (In retrospect a business page may have been a mistake; I think LinkedIn promotes business pages less than personal pages.)

I first started posting once a week, and with additional growth expanded to once every other day and at one point once a day. I have cut back recently (mostly just due to time). I did get more engagement, around 1000+ views per day when I was posting every day.

Probably the most important part of advertising Crime De-Coder, though, is the types of views I am getting. My followers are not just academic colleagues I was previously friends with; a decent share outside my first-degree network are police officers and other non-profit-related folks. I have landed several contracts where I know those individuals reached out to me based on my LinkedIn posting. It could be more, as my personal Crime De-Coder website ranks very poorly in Bing search, but my LinkedIn posts come up fairly high.

When I was first on Twitter I did have a few academic collaborations that I am not sure would have happened without it (a paper with Manne Gerell, and a paper with Gio Circo, although I had met Gio in real life before that). I do not remember getting any actual consulting work though.

I mentioned LinkedIn is not only better for advertising my work, but also for consuming other material. I did a quick experiment: I opened the home page and scrolled the first 3 non-advertisement posts on LinkedIn, X, and BlueSky. For LinkedIn:

This is likely a person I do not want anything to do with, but I agree with their comment. Whenever I use ServiceNow at my day job I want to rage quit (just send a Teams chat or email and be done with it; LLMs can do smarter routing these days). The next two are people I am directly connected with. Some snark by Nick Selby (I can understand the sentiment, albeit disagree with it; I will not bother to comment though). And something posted by Mindy Duong I likely would be interested in:

Then another advert, and then a post by Chief Patterson of Raleigh, whom I am not directly connected with, but was liked by Tamara Herold and Jamie Vaske (whom I am connected with).

The adverts are annoying, but the suggested posts (the feeds are weird now; they are not chronological) are not bad. I would prefer if LinkedIn had separate “general” and “my friends” sections, but overall I am happier with the content I see on LinkedIn than on the other sites.

X & BlueSky

I first created a personal Twitter account in 2018. Nadine Connell suggested it, and it was nice then. When I first joined, I think it was Cory Haberman who tweeted to say to follow my work, and I had a few hundred followers that first day. Then over the next two years, just posting blog posts and papers for the most part, I grew to over 1,500 followers IIRC. I also consumed quite a bit of content from criminal justice colleagues. It was much more academically focused, but it was a very good source of recent research and CJ-relevant news and content.

I then eventually deleted the Twitter account, due to a colleague being upset I liked a tweet. To be clear, the colleague was upset but it wasn’t a very big deal, I just did not want to deal with it.

I started a Crime De-Coder X account last year. I made an account to watch the Trump interview, and just decided to roll with it. I tried really hard to make X work – I posted daily, the same stuff I had been sharing on LinkedIn, just shorter form. After 4 months, I have 139 followers (again, when I joined Twitter in 2018 I had more than that on day 1). And some of those followers are porn accounts or bots. The majority of my posts get <=1 like and 0 reposts. It just has not resulted in getting my work out there the way it did in 2018, or the way LinkedIn does now.

So in terms of sharing work, the more recent X has been a bust. In terms of viewing other work, my X feed is dominated by short form video content (a mimic of TikTok) I don’t really care about. This is after extensively blocking/muting/saying I don’t like a lot of content. I promise I tried really hard to make X work.

So when I open up the Twitter home feed, it is two videos by Musk:

Then a thread by Per-Olof (whom I follow), and then another short video, a Death App joke:

So I thought this was satire, but clicking that fellow's posts, I think he may actually be involved in promoting that app. I don't know, but I don't want any part of it.

I have not been on BlueSky as long, but given my experience getting started on Twitter and then X, I am not going to worry about posting as much. I have 43 followers, and posts similar to those on X have gotten basically zero interaction for the most part. The content feed is different from X's, but is still not something I care that much about.

We have Jeff Asher and his football takes:

I am connected with Jeff on LinkedIn, where he only posts his technical material. So if you want to hear Jeff's takes on football and UT-Austin stuff, go ahead and follow him on BlueSky. Then we have a promotional post by a psychologist (I likely would be interested in following his work; this particular post, though, is not very interesting). And a not-funny, Onion-like post?

Then Gavin Hales, whom I follow, and typically shares good content. And another post I leave with no comment.

My BlueSky feed is currently dominated by folks in the UK. It could be good, but it just does not have the uptake yet to make it worth it, like Twitter had in 2018. Though given my different goals now (advertising my consulting business), it may be that even 2018 Twitter would not have been good either.

So for folks who subscribe to this blog, I highly suggest giving LinkedIn a try for your social media consumption and sharing.

How much do students pay for textbooks at GSU?

Given I am a big proponent of open data, replicable scientific results, and open access publishing, I struck up a friendship with Scott Jacques at Georgia State University. One of the projects we pursued is pretty simple, but could potentially save students a ton of money. If you have checked out your university's online library system recently, you may have noticed they have digital books (mostly from academic presses) that you can just read. No limits like the local library; they are simply available to all students.

So the idea Scott had was to identify books students are paying for, and then see if the library can negotiate with the publisher to license them for all students. This shifts the cost from the student to the university, but the licensing fees for these books are not that large (think less than $1,000). This can save money especially for a class with many students: say a $30 book with 100 students, that is $3,000 students are ponying up in toto.

To do this we would need course enrollments and the books each course has students buy. Of course this data exists, but I knew going in that no one was just going to nicely hand us a spreadsheet. So I set about scraping the data; you can see that work on GitHub if you care to.

The GitHub repo's data folder has fall 2024 and spring 2025 Excel spreadsheets if you want to see the data. I also have a filterable dashboard on my Crime De-Coder site.

You can filter for specific colleges, look up individual books, etc. (This is a preliminary dashboard that has a few kinks, if you get too sick of the filtering acting wonky I would suggest just downloading the Excel spreadsheets.)
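As a sketch of the kind of question the spreadsheets can answer (total student spend per publisher), here is a minimal pandas example. The rows and column names here are made up for illustration; they are not the repo's actual schema:

```python
import pandas as pd

# Toy rows mimicking the structure of the scraped spreadsheets
# (publishers/titles/figures are illustrative, not real data)
df = pd.DataFrame({
    "publisher": ["Pearson", "Pearson", "Conley Smith Publishing"],
    "title": ["College Algebra", "Intro Stats", "Using Excel"],
    "price": [120.0, 95.0, 100.0],
    "enrolled": [2600, 400, 800],
})

# Total student spend = price times number of enrolled students
df["total_spend"] = df["price"] * df["enrolled"]

# Aggregate spend per publisher, largest first
by_pub = (df.groupby("publisher")["total_spend"]
            .sum()
            .sort_values(ascending=False))
print(by_pub)
```

The same groupby/sum pattern works directly on the downloaded Excel files via `pd.read_excel` if you want to replicate the dashboard's totals yourself.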

One aspect of doing this analysis, though: the types of academic-press publishers Scott and I set out to identify are pretty small fish. The largest sellers happen to be academic textbook publishers (like Pearson and McGraw Hill). The biggest, coming in at over $300,000 of student spending in a year, is a Pearson text on algebra.

You may wonder why so many students are buying an algebra book. It is assigned across the pre-calculus courses. GSU is a predominantly low-income-serving institution, with the majority of students on Pell Grants. Those students at least will get their textbooks reimbursed via the Pell Grants (at least before the grant money runs out).

As a former professor, I can say these course bundles in my area (criminal justice) were of comically poor quality. I accede the math ones could be higher quality (I have not purchased this one specifically), but this suggests two solutions. One, universities should directly contract with Pearson to license the materials at a discount. The bookstore prices are often slightly higher than just buying directly from other sources (Pearson or Amazon). (Students on Pell Grants need to buy from the bookstore, though, to be reimbursed.)

A second option is simply to pay someone to create open access materials to swap in. Universities often have an option of taking a sabbatical to write a textbook. I am pretty sure GSU could throw $30k at an adjunct and they would write just as high (if not higher) quality material. For basic material like that, current LLM tools could speed the process by quite a bit.

For these types of textbooks, professors use them because they are convenient, so if a lower cost option were available that met the same needs, I am pretty sure you could convince the math department to make those materials the standard. If we go to page two of the dashboard, though, we see some new types of books pop up:

You may wonder, what is Conley Smith Publishing? It happens to be an idiosyncratic self-publishing platform. Look, I have a self-published book as well, but having 800 business students a semester buy your self-published $100 book on using Excel, that is just a racket. And it is a racket that, when I give that example to friends, almost everyone has experienced in their college career.

There is no easy solution to this latter case of professors ripping off their students. It is not illegal as far as I'm aware. Just guessing at the margins, that business prof is maybe making a $30k bonus a semester by forcing their students to buy their textbook. Unlike the academic textbook scenario, this individual will not swap out their materials, even if the alternative materials are higher quality.

Solving the issue will take senior administration at universities caring that professors are gouging their (mostly low-income) students and putting a stop to it.

This is not a problem unique to GSU; it is a problem at all universities. Universities could aim to make courses low/no-cost, and use that as advertisement. This should be particularly effective advertisement for low-income-serving universities.

If you are interested in a similar analysis for your own university, feel free to get in touch with either myself or Scott. We would like to expand our cost saving projects beyond GSU.

The story of my dissertation

My dissertation is freely available to read on my website (Wheeler, 2015). I still open up the hardcover copy I purchased every now and then. No one cites it, because no one reads dissertations, but it is easily the work I am most proud of.

For most of the articles I write there is some motivating story behind the work that you would never know just from reading the words. I think this is important, as the story is often tied to some more fundamental problem, and solving specific problems is the main way we make progress in science. The stifling way academics currently write peer-reviewed papers does not allow that extra narrative in.

For example, my first article (which also ended up being my masters thesis; at Albany at that time you could go directly into the PhD from undergrad and get your masters along the way) was about the journey to crime after people move (Wheeler, 2012). The story behind that paper: while I was working at the Finn Institute, Syracuse PD was interested in targeted enforcement of chronic offenders, many of whom drive around without licenses. I thought, why not look at the journey to crime to see where they are likely driving. When I did that analysis, I noticed the few hundred chronic offenders had something like a 5-fold number of home addresses in the sample. (If you are still wanting to know where they drive: they drive everywhere; chronic offenders have very wide spatial footprints.)

Part of the motivation behind that paper was if people move all the time, how can their home matter? They don’t really have a home. This is a good segue into the motivation of the dissertation.

More of my academic reading at that point had been on macro and neighborhood influences on crime. (Forgive me, as I am likely to get some of the timing wrong in my memory, but this is written as best as I remember it.) I had a class with Colin Loftin that I do not remember the name of, but it discussed things like the southern culture of violence, Rob Sampson's work on neighborhoods and crime, and likely other macro work I cannot remember. Sampson's work in Chicago made the biggest impression on me. I have a scanned copy of Shaw & McKay's Juvenile Delinquency (2nd edition). I also took a spatial statistics class with Glenn Deane in the sociology department, where the major focus was on areal units.

When thinking about the dissertation topic, the only advice I remember receiving was about scope. Shawn Bushway at one point told me about a stapler thesis (three independent papers bundled into a single dissertation). I just wanted something big, something important. I intentionally sought out to try to answer some more fundamental question.

So I had the first inkling of “how can neighborhoods matter if people don't consistently live in the same neighborhood?” The second was that in my work at the Finn Institute with police departments, hot spots were the only thing any police department cared about. It is not uncommon even now for an academic to fit a spatial model of crime and demographics at the neighborhood level, and have a throwaway paragraph in the discussion about how it would help police better allocate resources. It is comically absurd – you can just count up crimes at addresses or street segments and rank them, and that will be a much more accurate and precise system (no demographics needed).

So I wanted to do work on micro level units of analysis. But I had Glenn and Colin on my dissertation committee – people very interested in macro and neighborhood level processes. So I would need to justify looking at small units of analysis. Reading the literature, Weisburd and Sherman did not have, to me, clearly articulated reasons to be interested in micro places beyond their utility for police. Sherman had the paper counting up crimes at addresses (Sherman et al., 1989), and none of Weisburd's work had, to me, any clear causal reasoning for looking at micro places to explain crime.

To be clear, wanting to look at small units as the only guidepost in choosing a topic is a terrible place to start. You should start from a more specific, articulable problem you wish to solve (if others pursuing PhDs are reading). But I did not have that level of clarity in my thinking at the time.

So I set out to articulate a reason to be interested in micro level areas that I thought would satisfy Glenn and Colin. I started out thinking about doing a simulation study, similar to what Stan Openshaw did (1984), motivated by Robinson's (1950) ecological fallacy. While doing that I realized there was no point in the simulation; you could figure it all out in closed form (as have others before me). So I proved that random spatial aggregation would not result in the ecological fallacy, but aggregating nearby spatial areas would, assuming there is a spatial covariance between nearby areas. I thought at the time it was a novel proof – it was not (footnote 1 on page 9 lists all the things I read after this). Even now the Wikipedia page on the ecological fallacy has an unsourced overview of the issue, that cross-spatial correlations make the micro and macro equations unequal.

This in and of itself is not interesting, but the process did clearly articulate to me why you want to look at micro units. The example I like to give is as follows – imagine you have a bar you think causes crime. The bar can cause crime inside the bar, as well as diffusing risk into the nearby area. Think people getting in fights in the bar, vs people being robbed walking away from a night of drinking. If you aggregate to large units of analysis, you cannot distinguish between “inside bar crime” vs “outside bar crime”. So that is clear causal reasoning for when you want to look at particular units of analysis – the ability to estimate diffusion/displacement effects is highly dependent on the spatial unit of analysis. If you have an intervention that is “make the bar hire better security” (a la John Eck's work), that should likely not have any impact outside the bar, only inside the bar. So local vs diffusion effects are not entirely academic; they can have specific real world implications.
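A quick simulation illustrates the point. This is my own hedged sketch, not anything from the dissertation itself: simulate a micro process with a local effect and a neighbor spillover effect, then aggregate contiguous units and watch the two effects become indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)

# Sum of the two neighbors' x values (zero at the lattice edges)
xn = np.zeros(n)
xn[1:-1] = x[:-2] + x[2:]

# Micro process: local effect 2.0, spillover effect 0.5 per neighbor
y = 2.0 * x + 0.5 * xn + rng.normal(scale=0.5, size=n)

# Micro-level regression recovers both effects separately
X = np.column_stack([np.ones(n), x, xn])
b_micro = np.linalg.lstsq(X, y, rcond=None)[0]

# Aggregate contiguous blocks of 50 units (spatial aggregation)
k = 50
Yg = y.reshape(-1, k).sum(axis=1)
Xg = x.reshape(-1, k).sum(axis=1)
A = np.column_stack([np.ones(len(Xg)), Xg])
b_agg = np.linalg.lstsq(A, Yg, rcond=None)[0]

print(b_micro[1], b_micro[2])  # approx 2.0 and 0.5 (local vs spillover)
print(b_agg[1])                # approx 3.0: local + spillovers conflated
```

At the aggregate level the spillovers fall mostly inside the block, so the slope collapses to roughly the local effect plus both neighbor effects; the two mechanisms can no longer be told apart.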

This logic does not always favor smaller spatial units of analysis, though. Another example I like to give: say you are evaluating a city-wide gun buyback. You could look at more micro areas than the entire city, e.g. see if crime decreased in neighborhood A and increased in neighborhood B, but that likely does not invalidate the macro city-wide analysis, which is just an aggregate estimate over the entire city and in some cases is preferable.

Glenn Deane at some point told me that I am a reductionist, which was the first time I heard that word, but it did encapsulate my thinking. You could always go smaller, there is no atom to stop at. But often it just doesn’t matter – you could examine the differences in crime between the front stoop and the back porch, but there is not likely meaningful causal reasons to do so. This logic works for temporal aggregation and aggregating different crime types as well.

I would need to reread Great American City, but I did not take this to be necessarily contradictory to Sampson's work on neighborhood processes. Rob came to SUNY Albany to give a talk at the sociology department (I don't remember the year). Glenn invited me to whatever they were doing after the talk, and being a hillbilly I said I needed to go back to work at DCJS, I was on my lunch break. (To be clear, no one at DCJS would have cared.) I am sure I would not have been able to articulate anything of importance to him, but in retrospect I do wish I had taken that opportunity.

So with the knowledge of how aggregation bias occurs in hand, I formulated a few different empirical research projects. One was the bars and crime idea I have already given an example of. I had a few interesting findings, one being that the diffusion effects were larger than the local effects. I also estimated the bias from bars selecting into high crime areas via a non-equivalent dependent variable design – the only time I have used a DAG in any of my work.

I gave a job talk at Florida State before the dissertation was finished. I had this idea in the hotel room the night before my talk. It was a terrible idea to add it to the talk, and I did not sufficiently prepare what I was going to say, so it came out like a jumbled mess. I am not sure whether I would want to remember or forget that series of events (which include me asking Ted Chiricos at dinner whether you can fish in the Gulf of Mexico; I feel I am OK in one-on-one chats, but at group dinners I am more awkward than you can possibly imagine). It also included nice discussions though. Dan Mears asked me a question about emergent macro phenomena that I did not have a good answer to at the time, but now I would say simple causal processes having emergent phenomena is a reason to look at the micro, not the macro. Eric Stewart asked me if there is any reason to look at neighborhoods, and I said no at the time, but I should have given my gun buyback example.

The second empirical study I took from broken windows theory (Kelling & Wilson, 1982). For the majority of social science theories, some spatial diffusion is to be expected. Broken windows theory, though, has a very clear spatial hypothesis – you need to see disorder for it to impact your behavior. So you do not expect spatial diffusion to occur beyond someone's line of sight. To measure disorder, I used 311 calls (I had this idea before I read Dan O'Brien's work, see my prospectus, but Dan published his work on the topic shortly thereafter, O'Brien et al., 2015).

I confirmed this to be the case, conditional on controlling for neighborhood effects. I also discuss how if the underlying process is smooth, using discrete neighborhood boundaries can result in negative spatial autocorrelation, which I show some evidence of as well.

This suggests that a smooth measure of neighborhoods, like Hipp's idea of egohoods (Hipp et al., 2013), is probably more reasonable than discrete neighborhood boundaries (which are often quite arbitrary).

While I did end up publishing those two empirical applications (Wheeler, 2018; 2019), which was hard, I was too defeated to even worry about publishing a more specific paper on the aggregation idea. (I think I submitted that paper to Criminology, but it was not well received.) I was partially burned out from the bars and crime paper, which went through at least one R&R at Criminology and was still rejected. And then I went through four rejections for the 311 paper. I had at that point multiple other papers that took years to publish. It is a slog, and degrading, to be rejected so much.

But that is really my only substantive contribution to theoretical criminology in any guise. After the dissertation, I just focused on either policy work or engineering/method applications. Which are much easier to publish.


Some things work

A lawyer and economist, Megan Stevenson, last year released an essay that was essentially “Nothing Works” 2.0. For the non-criminologists: “nothing works” refers to a report by Robert Martinson in the 1970s that was critical of the literature on prisoner rehabilitation, concluding that, to date, essentially all attempts at rehabilitation had been ineffective.

Martinson was justified. With the benefit of hindsight, most of the studies Martinson critiqued were poorly run (in terms of randomization, almost all were observational, with self-selection into treatment) and had very low statistical power to detect any benefits. (For person-based experiments in CJ, think you typically need 1,000+ participants, not a few dozen.)

Field experiments are hard and messy, and typically we are talking about “reducing crime by 20%” or “reducing recidivism by 10%” – they are not miracles. You can only actually know if they work using more rigorous designs that were not used at that point in social sciences.

Stevenson does not deny those modest benefits exist, but moves the goalposts to say CJ experiments to date have failed because they do not generate widespread, sweeping change in CJ systems. This is an impossible standard, and is an example of the perfect being the enemy of the good.

A recent piece by Brandon del Pozo and colleagues critiques Stevenson, and I agree with the majority of what Brandon says. Stevenson's main critiques are not actually with experiments, but more broadly with organizational change (an area del Pozo is doing various work in now).

Stevenson's critique is broader than just policing, but I would actually argue that the proactive policing ideas of hot spots and focused deterrence have diffused into the field broadly enough that her points about systematic change are false (at least in those two examples). They started out as individual projects though, and only diffused through repeated application in a slow process.

As I get older, I am actually more of the engineering mindset that Stevenson is critical of. As a single person, I cannot change the whole world. As a police officer or crime analyst or professor, you cannot change the larger organization you are a part of. You can, however, do one good thing at a time.

Even if that singular good thing you did is fleeting, it does not make it in vain.


GenAI is not a serious solution to California’s homeless problem

This is a rejected op-ed (or at least none of the major papers in California I sent it to bothered to respond and say no thanks; it could be that none of them even looked at it). Might as well post it on my personal blog and have a few hundred folks read it.


Recently Gov. Newsom released a letter of interest (LOI) for tech companies to propose how the state could use GenAI (generative artificial intelligence) to help with California's homeless problem. The rise in homelessness is a major concern, not only for Californians but for individuals across the US. That said, the proposal is superficial and likely to be a waste of time.

A simple description of GenAI, for those not aware: tools that let you ask the machine questions in text and get a response. So you can ask ChatGPT (a currently popular GenAI tool) something like “how can I write a python function to add two numbers together” and it will dutifully respond with computer code (python is a computer programming language) that answers your question.
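For illustration, the code such a prompt gets back really is this simple (a sketch of a typical response, not ChatGPT's exact output):

```python
def add_two_numbers(a: float, b: float) -> float:
    """Return the sum of two numbers."""
    return a + b

print(add_two_numbers(2, 3))  # prints 5
```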

As someone who writes code for a living, this is useful, but not magic. Think of it more akin to auto-complete on your phone than something truly intelligent. The stated goals of Newsom’s LOI are either mostly trivial without the help of GenAI, or are hopeless and could never be addressed with GenAI.

For the first stated goal, “connecting people to treatment by better identifying available shelter and treatment beds, with GenAI solutions for a portable tool that the local jurisdictions can use for real-time access to treatment and shelter bed availability”. This is simply describing a database — one could mandate state funded treatment providers to provide this information on a daily basis. The technology infrastructure to accomplish this is not much more complex than making a website. Mandating treatment providers report that information accurately and on a timely basis is the hardest part.
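To illustrate how modest the technology ask actually is, here is a minimal sketch of such a bed-availability database using SQLite. The table, columns, and rows are entirely hypothetical, just to show the shape of the problem:

```python
import sqlite3

# In-memory database standing in for a state-hosted one.
# Schema and names here are hypothetical illustrations.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE bed_availability (
        provider   TEXT NOT NULL,
        county     TEXT NOT NULL,
        bed_type   TEXT NOT NULL,    -- e.g. 'shelter' or 'treatment'
        open_beds  INTEGER NOT NULL,
        reported   DATE NOT NULL     -- providers update this daily
    )
""")
con.executemany(
    "INSERT INTO bed_availability VALUES (?, ?, ?, ?, ?)",
    [("Provider A", "Los Angeles", "shelter", 12, "2024-06-01"),
     ("Provider B", "Los Angeles", "treatment", 3, "2024-06-01")],
)

# The real-time lookup a local caseworker would run
rows = con.execute(
    "SELECT provider, open_beds FROM bed_availability "
    "WHERE county = ? AND bed_type = ? AND open_beds > 0",
    ("Los Angeles", "shelter"),
).fetchall()
print(rows)
```

The query itself is trivial; as the op-ed argues, the hard part is mandating that providers report the `open_beds` numbers accurately and on time, which no GenAI tool solves.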

For the second stated goal, “creating housing with more data and accountability by creating clearer insights into local permitting and development decisions”: permitting decisions are dictated by the state as well as local ordinances. GenAI solutions will not uncover any suggested solution that most Californians don't already know — housing is too expensive and not enough is being built. This is in part due to the regulatory structure, as well as local zoning opposition to particular projects. GenAI cannot change state laws.

For the last stated goal, “supporting the state budget by helping state budget analysts with faster and more efficient policy”: helping analysts generate results faster is potentially something GenAI can assist with; more efficient policy is not. I do not doubt the state analysts can use GenAI tools to help them write code (the same as I do now). But getting that budget analysis done one day quicker will not solve any substantive homeless problem.

I hate to be the bearer of bad news, but there are no easy answers to solve California’s homeless crisis. If a machine could spit out trivial solutions to solve homelessness in a text message, like the Wizard of Oz gifting the Scarecrow brains, it would not be a problem to begin with.

Instead of asking for ambiguous GenAI solutions, the state would be better off thinking more seriously about how they can accomplish those specific tasks mentioned in the LOI. If California actually wants to make a database of treatment availability, that is something they could do right now with their own internal capacity.

Solutions to homelessness are not going to miraculously spew from a GenAI oracle, they are going to come from real people accomplishing specific goals.


If folks are reading this, check out my personal consulting firm, Crime De-Coder. I have experience building real applications. Most of the AI stuff on the market now is pure snake oil, so it is better to articulate what you specifically want and see if someone can help build that.

Crime De-Coder consulting

Hillbilly Lament

I recently read JD Vance's Hillbilly Elegy. I grew up in very rural Pennsylvania in the Appalachian mountains, and I am the same age as Vance. So I was interested in hearing his story. My background was different (rural people are of course not a monolith), but I commiserated with many of his experiences and feelings.

I think it is a good representation of rural life, more so than reading any sociology book. The struggles of rural people are in many ways the same as those of individuals living in poverty in urban areas. Vance highly empathized with Wilson's Truly Disadvantaged, and in the end he focuses on cultural behaviors (erosion of the core family, domestic violence, drug use). These are not unique to rural culture. Personally, I view much of the current state of rural America through Murray's Bell Curve, which Vance also discusses.

This is not a book review. I will tell some of my stories, and relate them to a bit of what Vance says. I think Murray's demographic segregation (brain drain) is a better way to frame why rural America looks the way it does than Vance's focus on cultural norms. This is not a critique of Vance's perspective though, just a different emphasis. I hope you enjoy my story, the same way I enjoyed reading Vance's. I think such stories give a peek into rural life, but they aren't a magic pill to really understand rural people. I do not even really understand rural people; I can only relate some of my personal experiences and feelings at the time.

It also is not any sort of political endorsement. I do encourage people to read his book – even if you do not like Vance's current politics, the book has what I would consider zero political commentary.

Farming

Where I grew up is more akin to Jackson, Kentucky than Middletown, Ohio – I grew up in Bradford County, Pennsylvania. The town I went to school in has a population of under 2,000, and my class size was around 80 students. A Subway opened in town when I was a teenager and it was a big deal (the only fast food place in town at the time).

My grandfather on my mother's side had a small dairy farm. One of the major cultural perspectives I have on rural people is somewhat contra Vance: farmers have an incredible work ethic. Again, this is not a critique of Vance's book; I could find lazy people like Vance discusses as well. I just knew more farmers.

Farming is Sisyphean – you get up and you milk the cows. It does not matter if you are sick, does not matter if it is raining, does not matter if you are tired. Skipping it would be like not feeding your pets in the morning – except you also do not get paid.

There was always an expectation of working. I was doing farm work at an early age. I always remember being in the barn, but even before I was 10 years old I was taught to operate heavy machinery (skidsteer, tractors). So I was doing what I would consider “real work” at that young an age, not just busy work you give a child to keep them occupied.

My grandfather retired and sold his farm when I was 11, but I went and worked on a neighbor’s farm shortly thereafter. We had a four-wheeler that I would drive up the road in the wee hours of the morning to go work. I was not an outlier among other kids my age. The modal kid in my school was not a farmer, but there were more than a dozen of my classmates who had the same schedule.

Farming, and manual labor in general, is brutal. When Vance talked about the floor tile company not being able to fill positions (despite the reasonable pay), I can hardly blame people for not taking those jobs. They are quite literally backbreaking.

Farming has become more consolidated and automated over time. The dairy farms I worked on were very small, with fewer than 100 cows. One of my memories is being exhausted stacking small square bales of hay. There is a machine that bundles up the hay and shoots it onto a wagon. I would be in the wagon stacking the bales. Then we would have to load the bales up a conveyor belt and stack them in the barn. The bales weighed around 50 pounds; it was crazy hard work.

Round bales were only starting to become more common in the area when I was a teenager. You can only move the larger round bales using farm equipment. I believe almost no one does the small square bales like I am describing anymore. The smaller square bales were also a much larger fire hazard. If the hay was wet when baled it would go through a fermentation process and potentially get hot enough in the center of the stack to catch fire.

This is one of the reasons I do not have much worry about automation taking jobs. A farmer not needing to hire extra labor to stack hay, and instead using a tractor to do the same work, is a good thing. This will apply to many manual labor jobs.

The Culture of Danger

One thing I had always been skeptical of when reading sociological texts was the southern culture of violence. Saying southern people have an honor culture, and then showing that homicide rates in the south are higher, is very weak evidence supporting that theory.

The first person who told me of that hypothesis in my PhD program at Albany was a professor named Colin Loftin. I think I laughed when he said it in class and I straight up asked “is this a real thing?” He was from Alabama and assured me it is, and told a story of a student fighting after bumping into another person walking in a hallway as an example. For those who do not know Colin, imagine a nerdy grandpa accountant with a well-groomed white beard who stepped out of a Hallmark movie. And he tells you southern people are prone to violence over trivial matters of honor – I was not convinced. As I said, I grew up in a very rural area; Pennsylvania is sometimes described as Philadelphia, Pittsburgh, and Alabama in between. But I did not experience that type of violence at all.

Vance’s book is the first account of the southern culture of violence that I found believable. The area I grew up in was substantively less violent – I would consider it more northeastern in vibe than what Vance describes as southern honor culture. I am sure I could find some apocryphal violent stories similar to what Vance describes if I prodded my relatives enough, but I did not take that to be a core part of our identity the same way Vance does.

Culture is hard to define. The ideal of “you work at all costs” I take as part of farming culture. It was not an ideal more generally held among the broader population though. I definitely was familiar with adults who were lazy, or adults with the volatile lifestyle Vance describes for his mother in his book. But I was personally familiar with more people who day in and day out performed incredibly hard manual labor.

Another aspect of growing up, which I have coined the culture of danger, is harder to associate with specific behaviors. But it is something I now recognize was constantly in the background – something we all took for granted as a given, but that is not broadly accepted in other parts of society.

Farming is incredibly dangerous. There are steel pipes that transport the milk from the cow to the bulk tank. After you are done with a round of milking, the interior of the pipeline gets washed with a round of acid. Then the pipes are rinsed with a second round of a much more concentrated caustic solution to get rid of the residual acid. When I was around 6 years old, my older sister and I were playing in the barn and I spilled the more caustic solution on myself. The solutions were housed in large plastic containers with a squirt top (no different than the pump on your liquid hand soap). All it took was a push, and it melted my nipple off (through the shirt I was wearing at the time). A slightly different trajectory and I would be missing half of my face.

Like I said previously, I learned to operate heavy machinery before I was 10. I distinctly remember doing things in both the skidsteer and the tractor when I was young that scared me – driving them in areas and situations where I was concerned I would roll the machines. Skidsteers are like mini tanks with a bucket on the front. I do not think ours had a seat belt, but if I rolled the skidsteer I think my chances were better than 50/50 to make it out (it had a roll cage, but if you were ejected and it rolled over you, you would be a pancake). If I rolled a tractor (these were very old tractors, no cab), I think my chances were less than 50% of getting out without serious injury.

“Learn to operate” here meant I was given a short lesson and then expected to go do work, by myself, without supervision. Looking back it was crazy, but of course when I was a kid it did not seem crazy.

Another example was climbing up silos. The farmer I worked for did not have a machine to empty out the silo, so I would need to climb the silo and fork out the silage for the cows’ feed. Imagine you took your lawn mower over a corn field – the clippings (both the cobs and the stalks) are what silage is.

I would climb up a ladder, around 50 feet, carrying a pitchfork. When I was 12. This was not a rare occurrence; this was one of my main responsibilities as a farmhand, I did this twice a day at least.

The cows were also fed a grain-like mixture (similar to cereal; it did not taste bad). I have mixed feelings about the grass-fed craze now, since the cows really enjoyed the grain and the silage (although I do not doubt a grass-fed diet could be better for their overall health). And I do not know if feeding them just baled hay counts as grass fed, or if they need to be fed entirely with fresh grass from the field.

Some may read this and think the child labor was the issue. I do not think that was a problem at all. To me there is no bright line between doing chores around the house and doing chores in the barn. I was paid when I worked on the neighbor’s farm, and it was voluntary. It was hard work, but it did not directly impact my schooling in any obvious way – no more than when a kid does sports or scouts. Operating heavy machinery when I was a child was crazy though, and the working conditions were equally dangerous for everyone.

Even more broadly, just driving around the area I lived was dangerous. There were six males I went to high school with who died in traffic accidents. So in a group of less than 300 males near me in age (+/- two years), six of them died. I cannot remember the posted speed limits, but the roads were winding. I am not sure the limits matter anyway; there is no reasonable way to enforce driving standards on all those back roads.

Death and danger were just part of life. Johnny died in a car accident, Mary is having a yard sale, and Bobby rolled the tractor – his femur broke the skin, but he was able to crawl to the road where Donny happened to be driving by, so he will be ok. So it goes.

The culture of danger as I remember it does not manifest in as many directly negative behaviors as “honor culture” or “I am too lazy and too high on drugs to keep a job”. So maybe some real ethnographers would quibble with my description of it as a culture.

I do think this is distinct from how individuals in certain areas live under a constant specter of interpersonal violence. I do not have any PTSD-type symptoms from my experience, like Vance describes based on his experience with child abuse. In the end I suspect the area I grew up in had worse early death rates per capita than many urban areas with violence, but the nature of it does not have quite the same effect on the psyche. Ignorance is bliss I suppose; you get used to driving tractors on steep hills.

Steaks are for rich people

One of the places I remember eating at growing up was Hoss’s Steakhouse in Williamsport. If we went to get clothes for school at the Lycoming Mall as a family, we would often eat there on the way home. It was a place with a nice salad bar – you could load up a salad with a pound of bacon bits, and have a soft serve ice cream on the side if you wanted.

Walking into the restaurant, before you are seated, there are pictures of the meals on the walls. I was maybe 14 at the time, and while waiting to be seated I pointed to one of the steak meals and said “I would like to get that”. The immediate response from my mother was “No you are not. You are getting the salad bar like always.” One of the ironic parts of this story to me is that, because I was working as an independent farmhand, I had my own money. I could have certainly paid for a steak dinner with my own cash.

After I was 16 and could drive, I did end up doing the majority of my own clothes shopping. I remember splurging on some really ugly yellow and green Puma sneakers one year (totally worth it, they were $80 and did get the “whoa nice shoes” comments at school as intended).

I have only recently developed a palate for steak. My son likes it and requests we go to the local steak house on occasion. I have for a while actively encouraged him to buy whatever expensive steak is on the menu when we go out to eat (Wagyu beef at the sushi place, that sounds interesting, you should try that). Makes me feel like the god damn king of the world.

Soda and Beer

When I was young (less than 10 years old), during summers I would spend most of my time on the farm, but once a week I would visit my other grandparents. I would do one of two activities with my grandfather: either golf or fishing. For golf we had a par-three 9-hole course in my town. I cannot hit a driver straight to save my life, but my iron game is at the level where I would not embarrass myself.

Fishing was just in little ponds in the area, mostly sunfish and bass. We would bring a sandwich, two Pepsis for myself, and two beers for Grandpa.

I tell this story both because it is a fond memory and because it highlights one of the stark differences between my lifestyle now (in terms of healthy eating) and back then. I am pretty sure I drank more soda than water growing up. Our well water was sulfurous, so it was quite unpleasant to drink. Soda was just a regular thing I remember everyone having, including kids.

Charles Murray in his Bell Curve proactively addresses most of the negative commentary you hear about it right in the writing. But one thing I thought was slightly cringe at the time I read the book (sometime while getting my PhD) was his “middle class values index”. To be clear these were not things about healthy eating, but more basic items like “received a high school diploma” and “have a job”. I did not object to the items themselves, but the moniker of “middle class” I thought was an unnecessary jab for something so subjective.

In retrospect though, “feeding kids soda like it is water” and “driving around with open containers of beer” is the most apropos “not middle class values” I can think of. So now I do not hold that name against Murray. These are not idiosyncratic to rural areas, you can identify people in poor urban areas who behave like this as well. But you definitely do not need to worry about being pulled over for an open container while driving where I grew up.

In Pennsylvania at this time, to buy beer you needed to go to a distributor. You could not get a six pack at the gas station; you needed to get at least a 24 pack. I figure this limited the number of people purchasing beer, but I do wonder, for those who did buy beer, whether it increased the amount of binge drinking.

Role Models and Choices

For a crazy dichotomy between how I grew up and what I know now, one of the only role model career choices I remember individuals talking about growing up was teaching. Getting a job as a teacher at the school district was a well respected (and well paying) career option in my town.

An aspect of this that can only be understood when you are outside of it is how insular this perspective is. A kid wants to be a teacher because that is pretty much the only career they are exposed to. This is from the perspective of “I may need to go to school, but I will come back here and get a job as a teacher”.

I personally did not have much respect for my teachers in high school, so I never seriously considered that as a career option. I was an incredibly snarky and sarcastic kid. My older sister, and then later my younger sister, were salutatorians of their classes. I (quite intentionally) did not try very hard.

For one story, my physics teacher (who was friends with my father) called home to ask if I was doing ok since I slept in class. My mother asked what my grade was, and since it was an A, she did not care. For another, which I am embarrassed about now, I would intentionally give wrong answers (in history or civics I believe) because the teacher would get upset. I found it hilarious at the time, but I realize now this was incredibly churlish (he cared that we were learning, which I cannot say for all of my teachers). Sorry Mr. Kirby.

So, I was not a super engaged student.

Vance talks about it seeming like the choices you make do not matter. I can understand that; when I think back on it, it did not seem to me I was making any choices at all. My parents always had an expectation that my siblings and I would go to college. Working as a farmhand (on other people’s farms at that point) was never a serious option.

Both my parents had associate degrees, and my sister (who is two years older than me) went to Penn State for accounting. That was about the extent of college advice I got – you should go. I never had a serious conversation about where I should go or what I should go for. Choose your own adventure.

I remember signing myself up for the SAT. I took the test on a Saturday morning in a testing center a few towns over. I finished each of the sections very fast, and I scored decently for not practicing at all (a 1200 I believe, out of a possible 1600). I have consistently done very well on the math and poorly on the English portions of tests in my life; I think I had a 700 in math and a 500 in English for the SAT.

One funny part of this is that, until graduate school, I did not actually understand algebra. Given my area of expertise (statistical analysis and optimization now), many people think I am quite a math geek. I did have good teachers in high school, but I was able to score that high on the SAT through memorization. For example, I knew the rule for derivatives of polynomials, but if you had asked me to do a simple proof at that point I would not have had any clue how to do it.

When I say I did not understand algebra, I mean when I was given a word problem that I needed mathematical concepts to solve, I just figured it out in my head. It was not until graduate school that I realized you can take words and translate them into mathematical equations.

I know now that this is somewhat common for intelligent people learning math. I home school my son, and I noticed the same thing with him. It took active engagement (forcing him to write down the algebraic equivalent, even when he could figure out the answer in his head). But just rote memorization can get you quite a good score on the SAT.

Wanting to get rid of standardized testing because poor people score worse on average is the wrong way to think about it. You should want to improve the opportunities for individuals to get a better education, not stick your head in the sand and pretend those inequalities do not exist.

SAT results in hand, I remember asking the guidance counselor about school information for Bloomsburg University (I chose Bloomsburg because it was not Mansfield and not Penn State, and was cheaper). And her response was simply to hand me a single page flier.

I can understand my parents not giving decent college advice; they did not know about scholarships or what opportunities were available. The high school guidance counselor’s sloth, though, makes me angry in retrospect. Our grade had fewer than 80 kids – she could have spent a few minutes reviewing each of those kids’ backgrounds and provided more tailored advice.

I am positive none of the kids in my class went to undergrad at any more advanced school than Penn State (and even that may have been only one student in my class). Cornell (an Ivy League school in Ithaca, New York) is actually closer than Penn State to where I lived – I did not know it existed when I was in high school. To be fair, I do not know if the guidance counselor knew my SAT score, but she could have asked. She certainly had access to my grades, could see I did well in STEM courses, and could have easily given suggestions like “you can apply for partial scholarships to many different places.”

This goes both ways; I knew several of my classmates who should not have gone to college. My best friend in high school was a solid B/C student, went to Mansfield for journalism, and stopped going in his junior year. He is doing fine now as a foreman for a natural gas company. Going for a four-year BA degree was a bad idea and a waste of money for him. And it was doubly bad going for journalism.

The brain drain had not happened yet in my high school. The level of discourse in my high school classes was excellent, and I noticed a significant regression in my first classes at Bloomsburg. (Bloomsburg had an average SAT score for entrants of 1000.)

I am intelligent, but I was not the most intelligent in my class. There were easily 10 other people in my class with comparable intelligence, all of whom would likely have qualified for at least partial Pell grant assistance.

For those in my class that did go to college, pretty much everyone went to one of the PASSHE schools (these are state schools in Pennsylvania, originally founded as normal schools that were intentionally spread out in rural areas). Most went to the closest nearby (Mansfield), but a few spread out to various institutions across the state (Lock Haven, Shippensburg, Indiana, etc.).

I have hobnobbed with Ivy League individuals (professors and students) since getting my PhD. There is no fundamental difference between kids who go to Ivy League schools and the kids I went to high school with. With a semester of SAT test prep, and a guidance counselor not too lazy to help apply for scholarships, we could have had a double-digit number of kids accepted to prestigious institutions at zero cost.

Do Not Talk About Money

I remember asking my grandfather why barns were red. He said it was because red paint was cheaper. That was the only conversation I can remember in my childhood that discussed money in any form.

When going to college I was filling out my FAFSA form, and asked my father how much money he made. His response was “enough”. Vance brings up the idea that, ironically, going to nicer colleges is cheaper for poor people – but they have no clue about that. I am fairly sure I would have qualified for partial Pell grant assistance; I just left the section on your parents’ income blank on the FAFSA form.

Besides the inept counsel on college, even though I had worked all these different jobs, I only remember actively thinking about pay when I was working jobs in college. The floor board factory was over $8 per hour. The ribbon factory was $11 per hour. Later, when I worked as a security guard for Bloomsburg University, I made $13 per hour.

Similar to Vance’s experience in Middletown, $13 per hour is quite decent to get an apartment and put food on a table for a family in that area of PA (at least at that time, 2004-2008). You are not saving up for retirement, but you shouldn’t need to live in the dregs and go hungry either.

In retrospect, the advice I needed at the time (but never received) was real talk about pursuing careers. This is wrapped up with college – you go to college to prepare yourself for a career (the expectation that I go to college was certainly not only to obtain a liberal arts education!).

I ended up choosing Bloomsburg University because I knew many of my classmates were going to a closer school (Mansfield) and I just wanted to be different. There was no thought put into choosing criminal justice as a major either. When folks ask me the question “what did you want to be when you grew up”, I cannot remember actively thinking about any specific career. Even when I was young and hitting baseballs in the back yard, I knew that I was not going to be a professional baseball player.

I remember at one point in the middle of undergrad at Bloomsburg realizing that a criminal justice degree is not really vocational, and I could quit if I really wanted to just go and be a police officer (the only vocation I likely associated with the major). Which I did not really want to do. So I was debating transferring to Bucknell for sociology, or to some community college for whatever degree you get to work on HVAC systems. (I do not know where the Bucknell idea came from; I must have thought “it was fancy” or something relative to Bloomsburg.)

I could have used other advice, like “you can negotiate your wage”, but not understanding my career options was likely the one thing that most negatively affected my long-term career progression. I do not mean to denigrate the HVAC job (given my background and what I know now, I am pretty sure that would have been a better return on investment than sociology at Bucknell!).

Not that I would go back in time and change anything (I received an excellent education at Bloomsburg in criminal justice, and ditto at SUNY Albany). But if someone somewhere said “hey Andy, you are pretty good at math, you should look into engineering”, my life trajectory would likely be very different.

Factory Jobs Suck

After I was able to drive at 16, I started to take other jobs outside of being a farmhand. These included being a line cook and dishwasher at a local restaurant where my aunt-in-law was a chef, and working for a company that did paving and seal-coating before I was 18. Cooking wasn’t bad. The restaurant was actually a mildly fancy steak and seafood place you needed a membership to eat at. Thinking back, I am honestly confused how enough people where I grew up could afford a membership to make that business model work.

Paving and seal-coating was comparable to farming in the level of effort. It was safer in the short term than farming (in a “I probably will not be maimed” way), but breathing in the fumes I am guessing would be worse long term. I do not remember my hourly wage (it may have just been minimum wage), but I did get a bunch of overtime in summer, which was nice.

When I went to college I did various jobs as well. I worked the cash register at Kentucky Fried Chicken at one point. On campus, in jobs intended for undergraduate students through the university, I worked as a carpenter building theater sets and as a tutor for the stats classes in the criminal justice department.

The last time I moved home over summer break (after sophomore year at Bloomsburg), I got a job stacking uncut floorboards in a factory. This was my first factory job – we would stand along a conveyor belt coming off the machine that cut the boards. Our pallets would be stacked with a single size, and we would rotate sizes after a while. So sometimes I was stacking 4 inch wide boards, another time 12 inch wide boards, etc.

This was monotonous and hard work, but not crazy bad, and I enjoyed my coworkers. Stacking hay bales was harder. Many of the people I worked with were on work release from the county jail. I got paid $8 an hour; they only got $4. But they were happy to do the work and not be sitting in jail. It is absurd that they did not receive the same pay. Most of them were in jail for DUIs.

After about a month of doing this job I had inflammation in my elbow. My elbow only had minor pain, but my arm would fall asleep when I slept and I would wake up in quite a lot of pain from that (so lack of sleep was really the bigger issue than my arm hurting). I asked to take a day off to go to the doctor (one of my coworkers said he had the same issue, but it was still worth working rather than sitting in jail). The owner mistakenly thought I was trying to get workers’ compensation, so he refused the day off and said I would be fired if I went to the doctor. (That was not my intention; I just wanted to get some pain medication.) So I just ended up quitting. The doctor said it was “tennis elbow,” and that it would only go away with rest, so I would have needed to quit anyway.

I then moved back to Bloomsburg for the summer (the town I grew up in was incredibly boring; hanging out in an empty college town was certainly a step up for a 20 year old). I got a job at a ribbon factory in Berwick (a neighboring town) for $11 an hour. You could consider Berwick a doppelganger for Middletown as Vance describes it.

Working at the ribbon factory was barely manual labor. I would sit at a conveyor belt and either count bags to fill boxes, or watch ribbons as they rolled by only to throw away malformed ones. This was soul sucking work. There was only one other younger person I befriended while working there; most everyone else was middle aged. I wondered to myself how these people survived this existence. Of all the jobs I have had in my lifetime this was easily the worst.

Despite having the “work every day” mentality from when I was young, I just stopped going to this job a little over a month after I started working there. I did not tell my boss I quit, I just literally stopped going. It wasn’t a hard job – the opposite, it was easy. A second grader could do the job.

So this is often what I think about when people say “the factory jobs are going away,” or Vance’s example of the tile company that cannot get people to work for them. You have a choice: break your back or watch ribbons go by on a conveyor belt. I recognize that having people just take a paycheck from the government is not good for them long term. I think people need something to strive for and take pride in. Working in a factory is not that.

It was at this point (between sophomore and junior year) that I went from doing the bare minimum to get by in my classes at Bloomsburg to being actively engaged in my course work and putting in real effort. Working at the ribbon factory was the nadir. Having the more advanced upper level classes did make me more engaged. It was around this time I began working as a security guard for the university – at $13 per hour, the most I had made at any job up to that point in my life.

I worked night shift for the security guard job. There was one point in my schedule where I needed to stay awake for over 48 hours. I would get off at 4AM, and if I went to sleep I would not wake back up (even with an alarm) for a 9AM class. So I would have to stay up, go to class, and then sleep for an extended period of time.

By my senior year of college, I was back in the working-crazy-all-the-time stage. At one point I had three jobs (tutoring statistics classes, working as a security guard, and even doing statistical analysis work for the local school district) – in addition to being a full time student.

Trailers and Going to Grad School

In the summer between junior and senior year at Bloomsburg University, I had an internship with state parole. The officer I shadowed covered the counties around Scranton, so a mix of rundown rust belt towns (like Berwick) but also more rural areas. There were more people living in trailers on single lots than in trailer parks.

The first house call I shadowed was an individual who only had a few more weeks on his sentence. He was very nervous and sweaty (this was the first house call I witnessed, so I did not think much of it at the time). The parole officer had the individual do a urine sample. I found out later that he failed (heroin), and the parole officer said the reason he was nervous was that they take multiple officers to arrest individuals – so with two of us showing up, he likely thought he was being taken back to prison. It probably was not the failed drug test, which, with him being that close to finished, would just result in a warning.

Shadowing parole was an eye opening experience. I had lived in rural areas, but I had been mostly sheltered from the decrepit lifestyle some people lived. At some houses the parole officer would make his parolees meet us outside, as he refused to go inside. People would not let their animals out (so the house smelled of the strong ammonia scent of urine, much worse than the barn). Houses with fleas and kids sleeping on mattresses in the living room. I knew people like this existed, but seeing them firsthand was different.

Matthew Desmond’s book, Evicted, in which he follows the lifestyle of various individuals trying to scrape by in Milwaukee, reminded me quite a bit of my time with parole. A bunch of people who could not make two good decisions in a row if their life depended on it. I presume getting drunk before your scheduled parole visit or not cleaning your sink and getting evicted are consequences of the same inability to make good long term decisions.

All of those individuals had no fundamental reason they needed to live in filthy conditions. You can take the cat litter out. People dig on trailers and trailer parks, but living in a trailer is not fundamentally bad. It is no different than living in a small apartment.

So I had planned on applying to be a parole officer after this experience. It was likely I would not be assigned the field area where I did my internship, but would either be given a position in a prison (they have officers inside state prisons who help with offenders’ release plans) or maybe be assigned to the field in the Philadelphia area. So I took the civil service exam to be a parole officer in the fall semester of my senior year.

I had made a mistake though; I had taken the exam too early. They called and asked if I could go to training in the spring. I said I could not do that, as I wanted to finish my degree. (The parole officer I shadowed had quit one semester early, and he said he regretted that.) The civil service exams in Pennsylvania had a rule that you could not re-take them within a certain time frame, so I did not have the ability to take the exam again when the timing would have worked out better.

So at this point in the fall semester I decided to apply to graduate school, not really knowing what I was getting into. The other option was applying to different police departments in the spring (I remembered Allentown and Baltimore had come to classes to recruit). I did well on the GREs, and was accepted into SUNY Albany (my top choice) quite early. I had also applied to Delaware. SUNY Albany was somewhat unique in that you could apply straight into the PhD program from undergrad. I did not realize it at the time, but this was very fortunate, as PhD programs were funded. I would have racked up a bill for the master’s degree at Delaware.

When I ended up getting into grad school at SUNY Albany, Julie Horney called in the afternoon, during one of my binge sleep sessions on my night security guard schedule, to say I was invited for orientation. I do not remember what I said on the phone call; when I got up later I was not sure whether it had been a dream or had really happened.

Later that spring when I visited Albany, I headed up straight from my night shift to the orientation day. I remember being confused, thinking this was an interview and that I was still not 100% guaranteed a spot. I said something to Dana Peterson and her response was along the lines of “you do not have to worry Andy, you have gotten in”.

Going to Albany ended up being one of the greatest decisions of my life. The academic atmosphere of a PhD program was totally different and fundamentally changed me. It would be a lie though if I said it was something I intentionally pursued, as opposed to a series of random happenstance factors in my life that pushed me to do that. I really had no clue what I was getting into.

Drugs

Growing up in Bradford county I had very little exposure to drugs. My friends and I would pinch beer and liquor from our parents on occasion, but in the grand scheme of things I was a pretty square kid. I knew some individuals smoked pot, but I did not know anyone in high school who did heroin or other opiates. Unlike Vance, serious drug or alcohol abuse was not something I personally witnessed in my family.

It was around the beginning of the 2000s that the gradual increase in opioid overdose deaths started across the US. In the town with the ribbon factory, Berwick, heroin usage was an issue. My sister-in-law (who grew up in Bloomsburg) ultimately died due to long term heroin usage. I worked on a grant to help Columbia county analyze the Monitoring the Future survey (a behavioral health survey that all students in Pennsylvania took). There were a few students in each grade cohort (as young as seventh grade) who stated they used heroin.

If you look at maps of drug overdose deaths in this time period, you can see a cluster start to form around the Scranton area by around 2005. This area in northeastern Pennsylvania is close enough to commute to New York City. It is possible the supply networks for heroin from more urban areas were established that way.

I suspect it is also related to working labor jobs though. One reason my grandfather retired from farming was because he had chronic shoulder pain. I would drive him to his visits to the VA hospital in Harrisburg on occasion, in a full size van, when I was a teenager. He was prescribed oxycodone, but knew that it was addictive, so he would take them one week and then abstain the following week.

I do not know how you work these labor jobs and not have some type of chronic pain. It is hard for me to imagine working these jobs for thirty years without them killing you. I recently had a kidney stone, and was stuck waiting for several hours in the ER before I was able to get a fentanyl drip. I went from pulsating pain and throwing up to relief almost instantly. I was prescribed oxycodone to use at home before I passed the stone. I did not take it.

When I was a professor at the University of Texas at Dallas, a well respected qualitative criminologist came to give a talk. He discussed his recent work, an ethnography of methamphetamine users in rural Alabama. His main thesis in the talk, which is something I think only an academic sociologist could come up with, is that women were influenced by their boyfriends to begin taking meth. (This is true, but you could do the same talk and say men were introduced to drugs by their female partners.) I asked at the end of his talk whether he thought his findings extended to heroin users, and his response was that heroin is an urban drug problem.

Another main point of his talk was taking pictures of his subjects in realistic settings. The idea being that most drug users are depicted in the media in a negative light, so we should take pictures of them so people do not think they are monsters. The lecture was mostly pictures of people’s trailers, along with pillow talk his interviewees discussed, which resulted in snickers from the audience at various points.

Taking pictures of trailers does not humanize poor people. It makes you look like you are Steve Irwin describing wildlife in the outback – I personally thought it was incredibly degrading.

The idea that you shouldn’t show the negative impacts of hard drug use is such a comically white knight perspective I am not sure whether it makes me want to laugh or cry. I did not take that oxycodone because I have seen, with my own eyes, what happens to people who are addicted to opioids. The most recent wave of fentanyl laced with xylazine can result in open sores and losing phalanges. I do not believe those people are monsters (does anyone?) but it is grotesque what drug addiction can do to people.

I suggest reading Vance’s discussion of his life growing up, over any sociologist, because of this. When what we have to offer is “some people think people who take drugs are monsters” and “you should take nice pictures of them”, people are well justified in ignoring academics as out of touch and absurd.

Lament

If I had the chance to sit down with Vance, one thing I would ask him is about his choice of elegy in the title of his book. I named my blog post lament. I have moved on with my life; my work now focuses on helping police departments with public safety, and it is entirely in urban areas.

I do not think anything related to my research could, even at the margins, materially improve the lives of the people I grew up with. There are things I think could marginally improve individuals’ outcomes, such as getting better advice about colleges. But there is nothing reasonable to be done to prevent traffic accidents or improve farm safety. You could attempt stricter safety regulations, but realistically enforcing them is another matter.

Vance in his book does not really talk about politics, but towards the end gives some examples of policies he thinks could help on the margins – such as restricting the number of Section 8 vouchers in a neighborhood (to prevent segregation). He is right that you cannot magically subsidize factory jobs and all will be well – it will not. Those jobs, as I said above, suck.

I view the current state of rural America, as I experienced it, via Murray’s Bell Curve. Specifically the idea of brain drain, and more broadly intellectual segregation. One of Murray’s theses was that, historically, rural communities had a mix of intelligent (and not so intelligent) individuals. Gradually over time, the world has become more globalized, so it is easier to move from the farm to industrialized areas.

This results in intelligent people – those who can go to college and keep a job – moving away. What is left over is a higher proportion of the types of people Vance focuses on in his book – individuals with persistent life problems. Criminologists will recognize this as fundamentally the same process as the urban blight described by the Chicago school of crime. Vance focuses on the culture that is the end result of this demographic process – the people who do not move away are the ones living the hard lives he describes.

Automation is the long term progression of farming in America. Farming in rural areas will eventually be just the minimal number of humans needed to oversee the automated machinery. I am not sure the town I grew up in will exist in 100 years.

And this to me is not a bad thing. I left to pursue opportunities that were not available to me if I stayed in Bradford county. I am not sad that I do not need to sling square bales of hay. My suggestion, to help give better advice to students about pursuing college and careers, will only hasten the demise of rural areas, not save them. This is the lament.

Politics aside, I found Vance’s biography of his life growing up worth reading. If you find my stories interesting, I suspect you will find his as well.

Some musings on plagiarism

There have been several high profile cases of plagiarism in the news recently. Chris Rufo, a conservative pundit, identified plagiarism in Claudine Gay’s and Christina Cross’s work. In a bit of tit-for-tat, Neri Oxman was then profiled for similar sins (her husband was one of the most vocal critics of Gay).

Much of the discussion around this seems more focused on who is doing the complaining than on the actual content of the complaints. It can simultaneously be true that Chris Rufo has ulterior motives to malign those particular scholars (ditto for those criticizing Neri Oxman) and that the complaints themselves are legitimate.

What is plagiarism? I would typically quote a specific definition here, but I actually do not like the version in the Merriam Webster (or American Heritage) dictionary. I assert plagiarism is the direct copying of material without proper attribution to its original source. The concept of “idea” plagiarism seems to muddy the waters – I am ignoring it here. That is, even if you don’t “idea” plagiarize, you can still do “direct copying of material without proper attribution”.

It may seem weird that you can do one without the other, but many of the examples given (whether by Rufo or by those who pointed out Neri Oxman’s) are insipid passages that are mostly immaterial to the main thesis of the paper, or they have some form of attribution that is not proper. So it goes; the way academics write, you can cut out large portions of writing whole cloth and it won’t make a difference to the story line.

These passages however are clearly direct copying of material without proper attribution. I think Cross’s is a great example, here is one (of several Rufo points out):

Words are so varied that if I gave you a table of descriptions, something like:

survey years: 1998, 2000, ...., 2012
missing data: took prior years survey values

And asked 100 scholars to put that into a plain text paragraph description, all 100 would have some overlap in wording, but nowhere near this extent. For some rough numbers: these paragraphs have around 30 words; say the vocabulary relevant to describing that passage is 100 words, then the expected overlap would be under 10% of the material. Here is a simple simulation to illustrate.
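The original linked simulation presumably encodes more realistic assumptions; below is a minimal toy version of the same idea, assuming 30-word passages drawn uniformly from a shared 100-word vocabulary (real word frequencies are skewed and real vocabularies are much larger, so the exact number is illustrative only):

```python
import random

def overlap_fraction(n_words=30, vocab=100, trials=5_000, seed=0):
    """Average share of unique words two independent passages have in common."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Two scholars independently draw a 30-word passage from the same vocabulary.
        a = set(rng.choices(range(vocab), k=n_words))
        b = set(rng.choices(range(vocab), k=n_words))
        total += len(a & b) / len(a)
    return total / trials

print(f"average unique-word overlap: {overlap_fraction():.0%}")
```

Even this toy version only measures which words appear; it ignores word order entirely, and matching long runs of words in the same order (as in the copied passages) is vastly less likely than matching an unordered bag of words.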

And this does not consider word order (or the fact that the real corpus is much larger than 100 words), both of which make the probability of overlap this severe much smaller.

Unlike what ASA says, it does not matter that what was copied is a straightforward description of a dataset, it clearly still is plagiarism in terms of “direct copying without proper attribution”. The Neri Oxman examples are similar, they are direct copying, even if they have an in text citation at the end, that is not proper attribution. If these cases went in front of ethics boards at universities that students are subject to, they would all be found guilty of plagiarism. The content of what was copied and its importance to the work does not matter at all.

So I find the defense of the clearly indefensible on ideological grounds among academics quite disappointing. But as a criminologist I am curious about its prevalence – if you took a sample of dissertations or peer reviewed articles, how many would have plagiarized passages? Would it be 1%, 10%, or 50%? I genuinely do not know (it clearly won’t be 0% though!).

I would do this to my own articles if I easily could (I don’t think I did, but it is possible I self-plagiarized portions of my dissertation in later peer reviewed articles). So let me know if you have Turnitin (or are aware of another service) that lets you upload PDFs to check this.

I’d note here that some of the defense of these scholars rests on the “idea” part of plagiarism, a shifting-sands definition saying it is necessary to steal some fundamental idea for it to count as plagiarism. Idea plagiarism isn’t really a thing, at least no more than vague norms among academics (or even journalists). Scooping ideas is poor form, but that is it.

I am not aware of my words being plagiarized, but I am aware of several examples of scholars clearly taking work from my code repositories or blog posts and not citing them. (As a note to scholars, it is fine to cite my blog posts; I have maybe a dozen or so citations to my blog at this point.) But those would not typically be considered plagiarism. If someone copies one of my functions and applies it to their own data, it is not plagiarism as typically understood. If I sent those examples to COPE and asked for the articles to be retracted, I am pretty sure COPE would say it is not plagiarism as typically treated.

Honestly it does not matter though. I find it obnoxious to not be cited, but it is a minor thing, and clearly does not impact the quality of the work. I basically expect my work on the blog to be MIT licensed (subject to change) – it is mostly a waste of time for me to police how it is used. Should Cross be disciplined for her plagiarism? Probably not – if it was an article I would suggest a correction would be sufficient.

I can understand students may be upset that they are held to higher standards than their professors, but I am not sure if that means students should be given more slack or if professors should be given less. These instances of plagiarism by Gay/Cross/Oxman I take more as laziness than anything, but they do not have much to do with whether they are fit for their jobs.

AI writing

Pretty soon even my plagiarism definition is not going to work, as generative chatbots are essentially stochastic parrots – everything they do is paraphrasing plagiarism, but in a form where the direct copying (the kind tools like Turnitin will identify) is hard to see. I am starting to get links to the blog from Perplexity and Iask. So people may cite ChatGPT or whatever service generated the text, but that service copied from someone else, and no one is the wiser.

These specific services have paraphrasing citations, e.g. you ask a question, it gives the response + 3 citations (these are called RAG applications in LLM speak). So you may think they are OK in the terms above – they give paraphrasing-style, end-of-paragraph citations. But I have noticed they frequently spit out near verbatim copies of my work for a few blog posts I get traffic for. The example I linked to was my post on fitting a beta-binomial model in python. I intentionally wrote that post because the first google results were bad/wrong, and ditto for the top stackoverflow questions. So I am actually happy my blog got picked up by google quite fast and made it to the top of the list.

These services are stochastic, so subject to change, but iask currently serves a direct copy of my work and does not list me in its citations (although I come up high in their search rankings, so it is not clear to me why I am not credited). Even if it did, it would be the same problem as Oxman – it would not be a proper quoted citation.

And the Cross faux pas of editing a few words will be not just common but routine with these tools.

From my perspective this is pretty much working as intended though – I saw a wrong on the internet and wrote a post so others do not make the same mistakes. Does it matter if the machine does not properly attribute me? I just want you to do your stats right; I don’t get a commission if you properly cite me. (I would rather you just personally hire me to help than get a citation!)

I think these services may make citation policing over plagiarism impossible though. No major insights here on how to prevent it – I said above it is mostly immaterial to the quality of the work, but I do think saying you can plagiarize carte blanche is probably not good.

Recoding America review, Data Science CV Update, Sworn Dashboard

Over this Christmas break I read Jennifer Pahlka’s Recoding America. It is a great book and I really recommend it.

My experience working in criminal justice is a bit different than Pahlka’s examples, but even if you are just interested in private sector product/project management this is a great book. It has various user experience gems as well (such as, for forms with questions that eliminate people, putting the eliminating questions in order by how many people they filter out).
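That question-ordering gem can be made concrete with a tiny sketch (the questions and filter rates here are made up): asking the highest-filtering questions first means ineligible applicants stop as early as possible.

```python
# Hypothetical screening questions and the share of applicants each one eliminates.
questions = [
    ("Do you have a high school diploma?", 0.05),
    ("Are you a state resident?", 0.20),
    ("Can you start within two weeks?", 0.10),
]

# Ask the most-eliminating questions first, so people who will be
# filtered out anyway answer as few questions as possible.
ordered = sorted(questions, key=lambda q: q[1], reverse=True)
for text, rate in ordered:
    print(f"{rate:.0%}  {text}")
```

The same greedy idea generalizes: if questions are roughly independent, sorting by elimination rate minimizes the expected number of questions an ineligible applicant has to answer.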

Pahlka really digs on waterfall, and I have critiqued agile on the blog in the past, but we are both just using generic words to describe bad behavior. I feel like a kindred spirit with Pahlka based on some of her anecdotes: concrete boats, ridiculous form questions, PDF inputs that only work on ancient web browsers, mainframes are not the problem stupid requirements are, hiring too many people makes things worse, people hanging up on you when you tell the truth – so many good examples.

To be specific about agile/waterfall, Pahlka is very critical of fixed requirements coming down from on high from policy makers. When you don’t have strong communication at the requirements gathering stage between techies, users, and the owners making the requests (which can happen in the private sector too), you can get some comical inefficiencies.

A good example for my CJ followers is the policy to do auto-clearance of criminal records in California. The policy makers wrote a policy saying that felony convictions for stealing less than $1,000 can be expunged, but there is no automated way to do this, since criminal records do not save the specific dollar amount of the larceny charge. (And the manual process is very difficult, so pretty much no one bothers.)

It probably would make more sense to say something like “a single felony larceny charge that is 5 years old will be auto-cleared”. That is not exactly the same, but it is similar in spirit to what the legislature wants, and can be easily automated based on criminal records the state already collects. A really effective solution would look like data people working with policy makers directly and giving scenarios: “if we set the criteria to X, it will result in Y clearances”. These are close to trivial things to ask a database person to comment on; there is no fundamental reason why policy people and techies can’t go back and forth and craft policy that makes sense and is simpler to implement.

To be more generic, what can happen is someone requests X, X is really hard/impossible, but you can suggest a, b, c instead that are easier to accomplish and probably meet the same high level goals. There is an asymmetry between what people ask for and their understanding of the work it takes to accomplish those requests; an important part of your job as a programmer/analyst is to give feedback that makes the requests better. It takes techies understanding the business requirements (Pahlka suggests government should hire more product owners; IMO I would rather have senior developer roles do that stuff directly). And it takes the people asking to be open to potential changes. Most are, in my experience – just sometimes you get people who hang up the phone when you don’t tell them what they want to hear.

I actually like the longer term, plan-out-a-few-months waterfall approach (I find it easier to manage junior developers; the agile shorter term stuff is too overbearing at times). But it requires good planning and communication between end users and developers, no matter whether you say you are doing waterfall or agile. My experience in policing is not much like policy people handing down stone tablets – I have always had flexibility to give suggestions in my roles. But I do think many junior crime analysts need to learn to say “you asked for percent change, here is a different stat instead that is better for what you want”.

What I am trying to do with CRIME De-Coder is really consistent with Pahlka’s goals with Code for America. I think it is really important for CJ agencies to take on more human investment in tech. Part of the reason I started CRIME De-Coder was anger – I get angry when I see cities pay software vendors six figures for crappy software that a good crime analyst could build, or pay a consulting firm six figures for some mediocre (and often inappropriate) statistical analysis. Cities can do so much better by internally developing the skills to take on many software projects, which are not moving mountains, and often outside software causes more problems than it solves.


At work we are starting to hire a new round of data scientists (no links to share, they are offshore in India, and the first round is through a different service). Resumes overstating technical expertise for data scientists are at lol levels at this point. Amazing how everyone is an LLM, deep learning, and big data expert these days.

I’ve written before how I am at a loss on how to interview data scientists. The resumes I am getting are also pretty much worthless at this point. One problem I am seeing in these resumes is that people work on teams, so people can legitimately claim “I worked on this LLM”, but when you dig in and ask about specifics you find out they only contributed this tiny thing (which is normal/OK). But the resumes look like they are Jedi masters in advanced machine learning.

I went and updated my data science resume in response to reading others. (I should probably put that in HTML, so it shows up in google search results.) I don’t really have advice for folks “what should your resume look like” – I have no clue how recruiters view these things. No doubt my resume is not immune to a recruiter saying “you have 10+ years with python, but I don’t see any Jira experience, so I don’t think you are qualified”.

What I have done is only include stuff in the resume where I can link to specific, public examples (peer reviewed work, blog posts, web pages, github). I doubt recruiters are going to click on a single link in the resume (let alone all 40+), but that is what I personally would prefer when I am reviewing a resume. Real tangible stuff so someone can see I actually know how to write code.

So for example in the most recent update of the resume, I took Unix, Kubernetes/Docker, Azure, and Databricks off. Those are all tech I have worked with at HMS/Gainwell, but do not have any public footprint to really show off. I have some stuff on Docker on the blog, but nothing real whiz bang to brag about. And I have written some about my deployment strategy for python code in Databricks using github actions. (I do like Azure DevOps pipelines, very similar to building github actions, which are nice for many of the batch script processes I do. My favorite deployment pattern at work is using conda + persistent Fedora VMs. Handling servers/Kubernetes/everything-Docker is a total pain.) “Expertise” in those tools is probably too strong; I think claiming basic competence is reasonable though. (Databricks has changed so much in the two years we have been using it at work, I’m not sure anyone outside of Databricks themselves could claim expertise – only if you are a very fast learner!)

But there is no real fundamental way for an outsider to know I have any level of competence/expertise in these tech tools. Honestly they do not matter – if you want me to use google cloud or AWS for something equivalent to Azure DevOps, or Snowflake instead of Databricks, it doesn’t really matter. You just learn the local stack in a month or two. For some rare things you do need very specialized tech skills – say if someone wanted me to optimize latency in serving pytorch LLMs, that would be tough given my background. Good luck posting that position on LinkedIn!

But the other things I list, I can at least pull up a web page to say “here is code I wrote to do this specific thing”. Proof is in the pudding. Literally 0 of the resumes I am reviewing currently have outside links to any code, so it could all be made up (and clearly for many people is embellished). I am sure people think mine is embellished as well, best I can do to respond to that is share public links.


For updates on CRIME De-Coder:

I researched ways to do payments for so long; in the end just turning on WooPayments in WordPress (and using an iframe) was a super simple solution (and it works fine for digital downloads and international payments). I will need to figure out webhooks with Stripe to do more complicated stuff eventually (like SaaS services, licenses, recurring payments), but for now this setup works for what I need.

I will start up newsletters again next week.

The sausage making behind peer review

Even though I am not on Twitter, I still lurk every now and then. In particular I can see web traffic referrals to the blog, so when I get new traffic I will go use nitter to look it up.

Recently my post about why I publish preprints was referenced in a thread. That blog post was written from the perspective of why I think individual scholars should post preprints. The thread it was tagged in was not taking the perspective of an individual writer – it was saying the whole idea of preprints is “a BIG problem” (Twitter thread, Nitter Thread).

That is, Dan thinks it is a problem that other people post preprints before they have been peer reviewed.

Dan’s point is one held by multiple scholars in the field (I had similar interactions with Travis Pratt back when I was on Twitter). Dan does not explicitly say it in that thread, but I take it as a pretty strong indication that he thinks posting preprints without peer review is unethical (Dan thinks postprints are OK). In prior conversations on Twitter, Pratt explicitly said it was unethical.

The logic goes like this – you can make errors, so you should wait until colleagues have peer reviewed your work to make sure it is “OK” to publish. Otherwise, it is misleading to readers of the work. In particular people often mention the media uncritically reporting preprint articles.

There are several reasons I think this opinion is misguided.

One, the peer review system itself is quite fallible. Having received, delivered, and read hundreds of peer review reports, I can confidently say that the entire peer review system is horribly unreliable. It has both a false negative and a false positive problem – in that things that should be published get rejected, and things that should not be published get through. Both happen all the time.

Now, it may be the case that the average preprint is lower quality than the average peer reviewed journal article (given selection in who posts preprints, I am actually not sure this is the case!). In the end though, you need to read and judge the article for yourself – you cannot just assume an article is valid simply because it passed peer review. Nor can you assume the opposite – that something not peer reviewed is invalid.

Two, the current peer review system is vast. To dramatically oversimplify, there are “low quality” journals (paid-for journals, some humanities journals, whatever journals publish the “a square of chocolate and a glass of red wine a day increases your life expectancy” garbage) and “high quality” journals. The people Dan wants to protect from preprints are exactly the people who are unlikely to know the difference.

I use scare quotes around low and high quality in that paragraph on purpose, because those superficial labels are not really fair. BMC probably publishes plenty of high quality articles; it just happened to also publish a paper that used a ridiculous methodology that dramatically overestimated vaccine adverse effects (where the peer reviewers phoned in superficial reviews). Simultaneously, high quality journals publish junk all the time (see Crim, Psych, Econ, Medical examples).

Part of the issue is that the peer review system is a black box. From a journalist’s perspective, you don’t know which papers had reviewers phone it in (or had their buddies give a thumbs up) versus which had rigorous reviews. The only way to know is to judge the paper yourself (even having the reviews is not informative relative to just reading the paper directly).

To me the answer is not “journalists should only report on peer reviewed papers” (or the same, no academic should post preprints without peer review) – all consumers need to read the work for themselves to understand its quality. Suggesting that something that is peer reviewed is intrinsically higher quality is bad advice. Even if on average this is true (relative to non-peer reviewed work), any particular paper you pick up may be junk. There is no difference from the consumer perspective in evaluating the quality of a preprint vs a peer reviewed article.

The final point I want to make, three, is that people publish things that are not peer reviewed all the time. This blog is not peer reviewed. I would actually argue the content I post here is often higher quality than many journal articles in criminology (due to the transparent, reproducible code I often share). But you don’t need to take my word for it – you can read the posts and judge that for yourself. Ditto for many other popular blogs. I find it pretty absurd for someone to think that my publishing a blog is unethical – ditto for preprints.

There is no point in arguing with people’s personal opinions about what is ethical and what is not. But thinking you are protecting the public by only allowing peer reviewed articles to be reported on is incredibly naive as well as paternalistic.

We would be better off, not worse, if more academics posted preprints, peer review be damned.

Some notes on synthetic control and Hogan/Kaplan

This will be a long one, but I have some notes on synthetic control and the back-and-forth between two groups. First, if you aren’t familiar, Tom Hogan published an article on the progressive District Attorney (DA) in Philadelphia, Larry Krasner, in which Hogan estimates that Krasner’s time in office contributed to a large increase in the number of homicides. The control homicides are estimated using a statistical technique called synthetic control, in which you derive an estimate of the homicide trend to compare Philly against, based on a weighted average of comparison cities.

Kaplan and colleagues (KNS from here on) then published a critique of various methods Hogan used to come up with his estimate. KNS provided estimates using different data and a different method to derive the weights, showing that Philadelphia did not have increased homicides after Krasner was elected. For reference:

Part of the reason I am writing this is that, if people cared enough, you could probably produce a similar back and forth for every synth paper. There are many researcher degrees of freedom in the process, and in turn you can make reasonable choices that lead to different results.

I think it is worthwhile digging into those in more detail though. For a summary of the method notes I discuss for this particular back and forth:

  • Researchers determine the treatment estimate they want (counts vs rates) – solvers misbehaving is not a reason to change your treatment effect of interest
  • The default synth estimator when matching on counts and pop can have some likely unintended side-effects (NYC pretty much has to be one of the donor cities in this dataset)
  • Covariate balancing is probably a red-herring (so the data issues Hogan critiques in response to KNS are mostly immaterial)

In my original draft I had a note that this post would not be in favor of Hogan nor KNS, but in reviewing the sources more closely, nothing I say here conflicts with KNS (and I will bring a few more critiques of Hogan’s estimates that KNS do not mention). So I can’t argue much with KNS’s headline that Hogan’s estimates are fatally flawed.

An overview of synthetic control estimates

To back up and give an overview of what synth is for general readers, imagine we have a hypothetical city A with homicide counts 10 15 30, where the 30 is after a new DA has been elected. Is the 30 more homicides than you would have expected absent that new DA? To answer this, we need to estimate a counterfactual trend – what the homicide count would have been in a hypothetical world in which a new progressive DA was not elected. You can see the city homicides increased the prior two years, from 10 to 15, so you may say “ok, I expected it to continue to increase at the same linear trend”, in which case you would have expected it to increase to 20. So the counterfactual estimated increase in that scenario is observed - counterfactual, here 30 - 20 = 10, an estimated increase of 10 homicides that can be causally attributed to the progressive DA.

Social scientists tend to prefer not to just extrapolate prior trends from the same location into the future. There could be widespread changes that occur everywhere that caused the increase in city A. If homicide rates accelerated in every city in the country, even those without a new progressive DA, it is likely something else is causing those increases. So say we compare city A to city B, and city B had a homicide count trend over the same time period of 10 15 35. Before the new DA in city A, cities A/B had the same pre-trend (both 10 15). In the post time period, city B increased to 35 homicides. So if using city B as the counterfactual estimate, the progressive DA reduced homicides by 5, again observed - counterfactual = 30 - 35 = -5. So even though city A increased, it increased less than we expected based on the comparison city B.
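To make the arithmetic in these two toy scenarios concrete, here is a minimal sketch (plain Python, toy numbers from the text):

```python
# City A homicide counts: two pre-treatment years, then the post-DA year.
pre = [10, 15]
observed_post = 30

# Scenario 1: extrapolate city A's own linear pre-trend (15 + 5 = 20).
counterfactual_trend = pre[-1] + (pre[-1] - pre[-2])
effect_trend = observed_post - counterfactual_trend    # 30 - 20 = 10

# Scenario 2: use comparison city B (10, 15, 35) as the counterfactual.
counterfactual_city_b = 35
effect_city_b = observed_post - counterfactual_city_b  # 30 - 35 = -5
```

Same observed data, two different counterfactuals, and the sign of the estimated effect flips.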

Note that this is not a hypothetical concern, it is a pretty basic one that you should always be concerned about when examining macro level crime data. There have been national level homicide increases over the time period Krasner has been in office (Yim et al., 2020, and see this blog post for updates). U.S. city homicide rates tend to be very correlated with each other (McDowall & Loftin, 2009).

So even though Philly has increased in homicide counts/rates while Krasner has been in office, the question is whether those increases are higher or lower than we would expect. That is where the synthetic control method comes in: we don't have a perfect city B to compare to Philadelphia, so we create our own "synthetic" counterfactual, based on a weighted average of many different comparison cities.

To make the example simple, imagine we have two potential control cities and homicide trends, city C1 0 30 20, and city C2 20 0 30. Neither looks like a good comparison to city A that has trends 10 15 30. But if we do a weighted average of C1 and C2, with the weights 0.5 for each city, when combined they are a perfect match for the two pre-treatment periods:

C1  C2  Average  cityA
 0  20       10     10
30   0       15     15
20  30       25     30

This is what the synthetic control estimator does, although instead of giving equal weights it determines the optimal weights to match the pre-treatment time period given many potential donors. In real data, for example, C1 and C2 may be given weights of 0.2 and 0.8 to give the correct balance based on the pre-treatment time periods.
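The weighted average in the toy example above can be checked directly; a minimal sketch with numpy (the 0.5/0.5 weights are the illustrative ones from the text, not estimated):

```python
import numpy as np

# Donor city homicide counts; the third period is post-treatment.
c1 = np.array([0, 30, 20])
c2 = np.array([20, 0, 30])
city_a = np.array([10, 15, 30])

weights = np.array([0.5, 0.5])
synthetic = weights[0] * c1 + weights[1] * c2  # [10, 15, 25]

# The pre-treatment match is exact, so the post-period gap is the estimate.
effect = city_a[2] - synthetic[2]  # 30 - 25 = 5
```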

The fundamental problem with synth

The rub with estimating the synth weights is that there is no one correct way to estimate them – you have more numbers to estimate than data points. In the Hogan paper, he has 5 pre-treatment time periods, 2010-2014, and 82 potential donors (the 99 other largest cities in the US, minus 17 with progressive prosecutors). So you need to learn 82 numbers (the weights) from 5 data points.


Side note: you can also consider the covariates you match on as additional data points, although I will go into more detail later on how matching on covariates is potentially a red-herring. Hogan I think uses an additional 5*3=15 time-varying points (pop, cleared homicides, homicide clearance rates), and maybe 3 additional time-invariant ones (median income, 1 prosecutor categorization, and homicides again!). So he maybe has 5 + 15 + 3 = 23 data points to match on (so the same fundamental problem, 23 data points to learn 82 weights). I am just going to quote the full passage in Hogan (2022a) here where he discusses covariate matching:

The number of homicides per year is the dependent variable. The challenge with this synthetic control model is to use variables that both produce parallel trends in the pre-period and are sufficiently robust to power the post-period results. The model that ultimately delivered the best fit for the data has population, cleared homicide cases, and homicide clearance rates as regular predictors. Median household income is passed in as the first special predictor. The categorization of the prosecutors and the number of homicides are used as additional special predictors. For homicides, the raw values are passed into the model. Abadie (2021) notes that the underlying permutation distribution is designed to work with raw data; using log values, rates, or other scaling techniques may invalidate results.

This is the reason why replication code is necessary – it is very difficult for me to translate this to what Hogan actually did. "Special" predictors here is jargon from the R synth package for time-invariant predictors. (I don't know from the verbal description how Hogan used a time-invariant value for the prosecutor categorization, for example – does he just treat it as a dummy variable?) Also, on only using median income – was this the only covariate, or did he fit a bunch of models and choose the one with the "best" fit? (It seems maybe he did do a search, but he doesn't describe the search, only the final selected model.)

I don’t know what Hogan did or did not do to fit his models. The solution isn’t to have people like me and KNS guess or have Hogan just do a better job verbally describing what he did, it is to release the code so it is transparent for everyone to see what he did.


So how do we estimate those 82 weights? Well, we typically place restrictions on the potential weights – such as that the weights need to be positive numbers, and that they should sum to 1. These are for a mix of technical and theoretical reasons (keeping the weights from being too large can reduce the variance of the estimator, a technical reason; we don't want negative weights because we don't think there are bizarro comparison areas with opposite-world trends, a theoretical one).

These are reasonable but ultimately arbitrary – there are many different ways to accomplish this weight estimation. Hogan (2022a) uses the R synth package; KNS use a newer method advocated by Abadie & L'Hour (2021) (very similar, but it tries to match to the closest single city, instead of spreading weights over multiple cities). Abadie (2021) lists probably over a dozen different procedures researchers have suggested over the past decade to estimate the synth weights.

The reason I bring this up is that when you have a problem with 82 parameters and 5 data points, the question isn't "what estimator provides good fit to in-sample data" – you should always be able to find an estimator that accomplishes good in-sample fit. The issue is whether that estimator is any good out-of-sample.
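As a sketch of what the weight estimation looks like under those constraints, here is a generic constrained least squares fit on made-up data mirroring Hogan's dimensions (this is my own illustrative formulation, not the R synth package's exact objective):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data: 82 donors, 5 pre-treatment periods (Hogan's dimensions).
rng = np.random.default_rng(0)
n_pre, n_donors = 5, 82
X = rng.normal(50, 10, size=(n_pre, n_donors))  # donor pre-period outcomes
y = rng.normal(50, 10, size=n_pre)              # treated unit pre-period outcomes

def sse(w):
    """In-sample squared error of the synthetic (weighted) pre-trend."""
    return np.sum((X @ w - y) ** 2)

res = minimize(
    sse,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,                              # nonnegative
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},  # sum to 1
    method="SLSQP",
)
# With 82 free weights and only 5 data points, very good in-sample fit is
# easy to achieve -- which by itself says nothing about out-of-sample quality.
```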

Rates vs Counts

So besides the estimator used, you can break down 3 different arbitrary researcher data decisions that likely impact the final inferences:

  • outcome variable (homicide counts vs homicide per capita rates)
  • pre-intervention time periods (Hogan uses 2010-2014, KNS go back to 2000)
  • covariates used to match on

Let's start with the outcome variable question, counts vs rates. So first, as quoted above, Hogan cites Abadie (2021) as saying you should prefer counts to rates: "Abadie (2021) notes that the underlying permutation distribution is designed to work with raw data; using log values, rates, or other scaling techniques may invalidate results."

This has it backwards though – the researcher chooses whether it makes sense to estimate treatment effects on the count scale vs rates. You don't switch your outcome of interest because you think the computer can't give you a good estimate for one of them. So imagine I show you a single city over time:

        Y0    Y1    Y2
Count   10    15    20
Pop   1000  1500  2000
Rate  0.01  0.01  0.01

You can see that although the counts are increasing, the rate is constant over the time period. There are times I think counts make more sense than rates (such as for cost-benefit analysis), but in this scenario the researcher would probably want to look at rates (as the shifting denominator is a simple explanation for the increase in counts).

Hogan (2022b) is correct in saying that the population in Philly is not shifting very much over time, but this isn't a reason to prefer counts. It suggests the choice of counts vs rates should not make much difference to the estimate, which just points back to the problematic findings in KNS (that making different decisions results in different inferences).

Now onto the point that Abadie (2021) says using rates is wrong for the permutation distribution – I don’t understand what Hogan is talking about here. You can read Abadie (2021) for yourself if you want. I don’t see anything about the permutation inferences and rates.

So maybe Hogan mis-cited and meant another Abadie paper – but Abadie himself uses rates for various projects (he uses per-capita rates in the cited 2021 paper, and Abadie et al. (2010) use rates in another example), so I don't think Abadie thinks rates are intrinsically problematic! Let me know if there is some other paper I am unaware of. I honestly can't steelman any reasonable source where Hogan (2022a) came up with the idea that counts are good and rates are bad.

Again, even if rates were problematic for the estimator, that is not a reason to prefer counts over rates; you would change your estimator to give you the treatment effect estimate you wanted.


Side note: Where I thought the idea of a problem with rates was going (before digging in and not finding any Abadie work actually saying there are issues with rates) was increased variance with homicide rate data. Hogan (2022a) estimates synth weights of Detroit (0.468), New Orleans (NO) (0.334), and New York City (NYC) (0.198); here are those cities' homicide rates graphed (spreadsheet with data + notes on sources).

You can see NO's rate is very volatile, so it is not a great choice for a matched estimator when using rates. (I use NO as an example in Wheeler & Kovandzic (2018); that much variance is fairly normal for high-crime, not-too-large cities in the US, see Baltimore for even more volatility.) I could foresee someone wanting to make a weighted synth estimator for rates: either make the estimator a population-weighted average, or penalize the variance for small-population rates. Maybe you can trick microsynth into doing a pop-weighted average out of the box (Robbins et al., 2017).


To discuss the Hogan results specifically, I suspect that NYC being a control city with high weight in the Hogan paper, which superficially may seem good (both large cities on the east coast), actually isn't a very good control considering the differences in homicide trends (either rates or counts) over time. (I am also not so sure about Hogan (2022a) describing NYC and New Orleans as "post-industrial" either. It is true to the extent that all urban areas in the US are basically post-industrial, but they are not rust belt cities like Detroit.)

Here, for reference, are homicide counts in Philly, Detroit, New Orleans, and NYC going back further in time:

NYC has such a crazy drop in the 90s; let's use the post-2000 data that KNS used to zoom in on the graph.

I think KNS are reasonable here to use 2000 as a cut point – it is more empirically based (post crime drop), in that you could argue the 90s are a "structural break", and that homicides settled down in most cities around 2000 (but still typically had a gradual decline). Given the strong national homicide trends across cities (here is an example I use for class, superimposing Dallas/NYC/Chicago), I think going even back to the 60s is easily defensible (more so than limiting to post-2010).

It depends on how strict you want to be whether you consider these 3 cities "good" matches for the counts post-2010 in Hogan's data. Detroit seems a good match on levels and an ok match on trends. NO is an ok match on trends. NYC and NO balance each other in terms of matching levels, though NYC has steeper declines (even during the 2010-2014 period).

The last graph shows where the estimated increases in Hogan (2022a) come from: Philly went up while those 3 other cities went down from 2015-2018 (with small upward bumps in 2019).

A final point for this section: be careful what you wish for with sparse weights and sum-to-1 in the synth estimate. What this means in practice, when using counts and matching on pop size, is that you need lines above and below Philly on those dimensions. So to get a good match on pop, it needs to select at least one of NYC/LA/Houston (Chicago was eliminated for having a progressive prosecutor). To get a good match on homicide counts, it also has to pick at least one city with more homicides per year, which limits the options to New York and Detroit (LA/Houston have lower overall homicide counts than Philly).

You can't do the default Abadie approach for NYC itself, for example (matching on counts and pop) – it will always have a bad fit when using other US cities as the donor pool. You either need to allow the weights to sum to more than 1, or the lasso approach with an intercept is another option (so you only match on trends, not levels).
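A rough sketch of that lasso-with-intercept idea on made-up data (scikit-learn; the specific penalty value is arbitrary): each pre-treatment year is an observation, each donor city a column, and the intercept absorbs the level gap so the fit is driven by trends:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Made-up data: 15 pre-treatment years, 82 donor cities.
rng = np.random.default_rng(1)
n_pre, n_donors = 15, 82
donors = rng.normal(0, 1, size=(n_pre, n_donors))  # donor pre-period outcomes

# Treated city follows donor 0's trend, but at a much higher level.
treated = donors[:, 0] + 300 + rng.normal(0, 0.1, size=n_pre)

# The intercept soaks up the level difference; the L1 penalty keeps the
# donor weights sparse, so only trend-matching donors are selected.
model = Lasso(alpha=0.05, fit_intercept=True)
model.fit(donors, treated)

selected = np.flatnonzero(model.coef_)  # donor 0 should be among these
```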

Because matching on trends, not levels, is what matters for proper identification in this design, this is all sorts of problematic with the data at hand. (This is also a potential problem with the KNS estimator. KNS note though that they don't trust their estimate offhand; their reasonable point is that small changes in the design result in totally different inferences.)

Covariates and Out of Sample Estimates

For the sake of argument, say I claimed Hogan (2022a) is bunk because it did not match on "per-capita annual number of cheese-steaks consumed". Even though on its face this covariate is nonsense, how do you know it is nonsense? In the synthetic control approach, there is no empirical, falsifiable way to know whether a covariate is a correct one to match on. There is no way to know that median income is better than cheese-steaks.

If you wish for more relevant examples, Philly obviously has more issues with street consumption of opioids than Detroit/NOLA/NYC, which others have shown is related to homicide and which has been getting worse over the time Krasner has been in office (Rosenfeld et al., 2023). (Or more simply, social disorganization is the more common way criminologists think about demographic trends and crime.)

This uncertainty in "what demographics to control for" is ok though, because matching on covariates is neither necessary nor sufficient to ensure you have estimated a good counterfactual trend. Abadie in his writings intended covariates to be more like fuzzy guide-rails – qualitative things you think the comparison areas should be similar on.

Because there is effectively an infinite pool of potential covariates to match on, I prefer the approach of simply limiting the donor pool a priori – Hogan limiting to large cities is on its face reasonable. Including other covariates is not necessary, and does not make the synth estimate more or less robust. Whether KNS used good or bad data for covariates is entirely a red-herring as to the quality of the final synth estimate.


Side note: I don't doubt that Hogan got advice to not share data and code. Sharing is certainly not the norm in criminology. It creates a bizarre situation though, in which someone can try to replicate Hogan by collating original sources, and Hogan can always come back and say "no, the data you have are wrong" or "your approach is not exactly replicating my work".

I get that collating data takes a long time, and people want to protect their ability to publish in the future. (Or maybe just limit their exposure to criticism.) It is blatantly antithetical to verifying the scientific integrity of people's work though.

Even if Hogan is correct that the covariates KNS used are wrong, it is mostly immaterial to the quality of the synth estimates. It is a waste of time for outside researchers to even bother replicating the covariates Hogan used.


So I used the phrase empirical/falsifiable – can anything associated with synth be falsified? Why yes it can – the typical approach is some type of leave-one-out estimate. It may seem odd, because synth estimates an underlying match to a temporal trend in the treated location, but there is nothing temporal about the synth estimate itself. You could jumble up the years in the pre-treatment sample and you would still estimate the same weights.

Because of this, you can leave a year out of the pre-treatment time period, run your synth algorithm, and then predict that left-out year. A good synth estimator will be close to the observed values for those out-of-sample estimates in the pre-treatment period (and as a side bonus, you can use that variance estimate to estimate the error in the post-treatment years).
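As a sketch of what that leave-a-year-out check could look like on made-up data (I use scipy's nonnegative least squares as a stand-in weight estimator, not any of the actual estimators discussed):

```python
import numpy as np
from scipy.optimize import nnls

# Made-up pre-treatment panel: 15 years, 10 donor cities.
rng = np.random.default_rng(2)
n_pre, n_donors = 15, 10
X = rng.normal(0, 1, size=(n_pre, n_donors))      # donor outcomes by year
true_w = np.array([0.6, 0.4] + [0.0] * 8)
y = X @ true_w + rng.normal(0, 0.05, size=n_pre)  # treated unit's outcomes

errors = []
for t in range(n_pre):
    keep = np.arange(n_pre) != t
    w_hat, _ = nnls(X[keep], y[keep])   # re-estimate weights without year t
    errors.append(y[t] - X[t] @ w_hat)  # out-of-sample prediction error

# RMSE over the held-out years gauges out-of-sample quality, and can feed a
# variance estimate for the post-treatment gap.
loo_rmse = np.sqrt(np.mean(np.square(errors)))
```

The same loop works for any weight estimator; you just swap out the `nnls` line.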

That is a relatively simple way to determine whether the Hogan 5-year vs KNS 15-year pre-periods produce "better" synth controls (my money is on KNS for that one). Because Hogan has not released data/code, I am not going to go through that trouble. As I said in the side note earlier, I could try, and Hogan could simply come back and say "you didn't do it right".

This also would settle the issue of "over-fit". You cannot just look at the synth weights and say that sparse weights mean not over-fit and non-sparse weights mean over-fit. For reference, Hogan is essentially fitting 82 weights based on 5 data points, and he identified a fit with 3 non-zero weights. Flip this around: if I had 5 data points and fit a model with 3 parameters, it is easily possible that 3-parameter model is overfit.

Simultaneously, it is not necessary to have a sparse weight matrix. Several alternative methods to estimate synth will not produce sparse weights (I am pretty sure Xu (2017) will not, and microsynth estimates are not sparse either, for just two examples). Because US cities have such clear national-level trends, a good estimator in this scenario may have many tiny weights (where good here means low bias and variance out of sample). Abadie thinks sparse weights are good for making the model more interpretable (and preventing poor extrapolation), but that doesn't mean a non-sparse solution is bad by default.

To be clear, KNS admit that their alternative results are maybe not trustworthy due to non-sparse weights, but this doesn't imply Hogan's original estimates are themselves "OK". I think maybe a correct approach with city-level homicide rate data will have non-sparse weights, due to the national-level homicide trend common across many cities.

Wrapping Up

If Crim and Public Policy still did response pieces, maybe I would go through the trouble of doing the cross-validation and making a different estimator (although I would be unlikely to be an invited commenter). But I wanted to at least do this write-up, because as I said at the start, I think you could do this type of critique with the majority of synth papers being published in criminology at the moment.

To just give my generic (hopefully practical) advice to future crim work:

  • don’t worry about matching on covariates, worry about having a long pre-period
  • for the default methods, you need to worry about whether you have enough "comparable" units – in terms of levels, not just trends
  • the only way to know the quality of the modeling procedure in synth is to do out-of-sample estimates

Bullet points 2/3 are perhaps not practical – most criminologists won't have the capability to modify the optimization procedure to the situation at hand. (I spent a few days trying, without much luck, to implement the penalized variants I suggested – sharing so others can try themselves; I need to move on to other projects!) It also takes a bit of custom coding to do the out-of-sample estimates.

For many realistic situations though, I think criminologists need to go beyond just pointing and clicking in software, especially for this underdetermined system-of-equations synthetic control scenario. I did a prior blog post on how I think many state-level synth designs are effectively underpowered (and suggested using lasso estimates with conformal intervals). I think that is a better default in this scenario as well, compared to the typical synth estimators, although you have plenty of choices.

Again, I had initially written this trying to present both sides, not being for or against either set of researchers. But sitting down and really reading all the sources and arguments, KNS are correct in their critique. Hogan is essentially hiding behind not releasing data and code, and in that scenario can make an endless set of (ultimately trivial) responses to anyone who publishes a replication/critique.

Even if some of the numbers KNS collated are wrong, it does not make Hogan's estimates right.

References