Large Language Models for Mortals book

I have published a new book, Large Language Models for Mortals: A Practical Guide for Analysts with Python. The book is available to purchase in my store, either as a paperback (for $59.99) or an epub (for $49.99).

The book is a tutorial on using python with all the major LLM foundation model providers (OpenAI, Anthropic, Google, and AWS Bedrock). It goes through the basics of API calls, structured outputs, RAG applications, and tool calling/MCP/agents. It also has a chapter on LLM coding tools, with example walkthroughs for GitHub Copilot, Claude Code (including how to set it up via AWS Bedrock), and Google’s Antigravity editor. (There are a few examples of local models as well, which I discuss in Chapter 2 before moving on to the APIs in Chapter 3.)

You can review the first 60-some pages (PDF link here if on iPhone).

While many of the examples in the book are criminology focused, such as extracting crime elements from incident narratives or summarizing time series charts, the lessons are more general and relevant to anyone looking to learn the LLM APIs. I say “analyst” in the title, but this is really relevant to:

  • traditional data scientists looking to expand into LLM applications
  • PhD students (in all fields) who would like to use LLM applications in their work
  • analysts looking to process large amounts of unstructured textual data

Basically, if you want to build LLM applications, this is the book to help you get started.

I wrote this book partially out of fear – the rapid pace of LLM development has really upended my work as a data scientist. In just the past year or two, this has become the most important set of skills (more so than traditional predictive machine learning). This book is the one I wish I had several years ago, and it will give analysts a firm grounding in using LLMs in realistic applications.

Again, the book is available in my store for purchase worldwide.

Here are all the sections in the book – whether you are an AWS or Google shop, want to learn the different database alternatives for RAG, or want self-contained examples of agents with python code for OpenAI, Anthropic, and Google, this is a resource you should strongly consider purchasing.

Several more blog posts are coming in the near future: how I set up Claude Code to help me write (and not sound like a robot), how to use conformal inference and logprobs to set false positive rates for classification with LLMs, and some pain points with compiling a Quarto book with stochastic outputs (and notes on the varying reliability of each of the models).

But for now, just go and purchase the book!


Below is the table of contents to review – the print version is over 350 pages (on letter paper), with over 250 python code snippets and over 80 screenshots.

Large Language Models for Mortals: A Practical Guide for Analysts with Python
by Andrew Wheeler
TABLE OF CONTENTS
Preface
Are LLMs worth all the hype?
Is this book more AI Slop?
Who this book is for
Why write this book?
What this book covers
What this book is not
My background
Materials for the book
Feedback on the book
Thank you
1 Basics of Large Language Models
1.1 What is a language model?
1.2 A simple language model in PyTorch
1.3 Defining the neural network
1.4 Training the model
1.5 Testing the model
1.6 Recapping what we just built
2 Running Local Models from Hugging Face
2.1 Installing required libraries
2.2 Downloading and using Hugging Face models
2.3 Generating embeddings with sentence transformers
2.4 Named entity recognition with GLiNER
2.5 Text Generation
2.6 Practical limitations of local models
3 Calling External APIs
3.1 GUI applications vs API access
3.2 Major API providers
3.3 Calling the OpenAI API
3.4 Controlling the Output via Temperature
3.5 Reasoning
3.6 Multi-turn conversations
3.7 Understanding the internals of responses
3.8 Embeddings
3.9 Inputting different file types
3.10 Different providers, same API
3.11 Calling the Anthropic API
3.12 Using extended thinking with Claude
3.13 Inputting Documents and Citations
3.14 Calling the Google Gemini API
3.15 Long Context with Gemini
3.16 Grounding in Google Maps
3.17 Audio Diarization
3.18 Video Understanding
3.19 Calling the AWS Bedrock API
3.20 Calculating costs
4 Structured Output Generation
4.1 Prompt Engineering
4.2 OpenAI with JSON parsing
4.3 Assistant Messages and Stop Sequences
4.4 Ensuring Schema Matching Using Pydantic
4.5 Batch Processing For Structured Data Extraction using OpenAI
4.6 Anthropic Batch API
4.7 Google Gemini Batch
4.8 AWS Bedrock Batch Inference
4.9 Testing
4.10 Confidence in Classification using LogProbs
4.11 Alternative inputs and outputs using XML and YAML
4.12 Structured Workflows with Structured Outputs
5 Retrieval-Augmented Generation (RAG)
5.1 Understanding embeddings
5.2 Generating Embeddings using OpenAI
5.3 Example Calculating Cosine similarity and L2 distance
5.4 Building a simple RAG system
5.5 Re-ranking for improved results
5.6 Semantic vs Keyword Search
5.7 In-memory vector stores
5.8 Persistent vector databases
5.9 Chunking text from PDFs
5.10 Semantic Chunking
5.11 OpenAI Vector Store
5.12 AWS S3 Vectors
5.13 Gemini and BigQuery SQL with Vectors
5.14 Evaluating retrieval quality
5.15 Do you need RAG at all?
6 Tool Calling, Model Context Protocol (MCP), and Agents
6.1 Understanding tool calling
6.2 Tool calling with OpenAI
6.3 Multiple tools and complex workflows
6.4 Tool calling with Gemini
6.5 Returning images from tools
6.6 Using the Google Maps tool
6.7 Tool calling with Anthropic
6.8 Error handling and model retry
6.9 Tool Calling with AWS Bedrock
6.10 Introduction to Model Context Protocol (MCP)
6.11 Connecting Claude Desktop to MCP servers
6.12 Examples of Using the Crime Analysis Server in Claude Desktop
6.13 What are Agents anyway?
6.14 Using Multiple Tools with the OpenAI Agents SDK
6.15 Composing and Sequencing Agents with the Google Agents SDK
6.16 MCP and file searching using the Claude Agents SDK
6.17 LLM as a Judge
7 Coding Tools and AI-Assisted Development
7.1 Keeping it real with vibe coding
7.2 VS Code and GitHub Install
7.3 GitHub Copilot
7.4 Claude Code Setup
7.5 Configuring API access
7.6 Using Claude Code to Edit Files
7.7 Project context with CLAUDE.md
7.8 Using an MCP Server
7.9 Custom Commands and Skills
7.10 Session Management
7.11 Hooks for Testing
7.12 Claude Headless Mode
7.13 Google Antigravity
7.14 Best practices for AI-assisted coding
8 Where to next?
8.1 Staying current
8.2 What to learn next?
8.3 Forecasting the near future of foundation models
8.4 Final thoughts

Part-time product design positions to help AI companies

Recently on the Crime Analysis sub-reddit an individual posted about working with an AI product company developing a tool for detectives or investigators.

The Mercor platform has many opportunities that may be of interest to my network, so I am sharing them here. They include positions not only for investigators, but also for GIS analysts, writers, community health workers, etc. (For the eligibility interviewer role, I think anyone who has held a job in government services would likely qualify – it is just reviewing questions.)

All are part time (minimum of 15 hours per week), remote, and can be based in the US, Canada, or UK. (They cannot support H-1B or OPT visas in the US.)

Additionally, for professionals looking to get into the tech job market, see these two resources:

I actually just hired my first employee at Crime De-Coder. Always feel free to reach out if you think you would be a good fit for the types of applications I am working on (python, GIS, crime analysis experience). I will put you on the list of people to reach out to when new opportunities are available.


Detectives and Criminal Investigators

Referral Link

$65-$115 hourly

Mercor is recruiting Detectives and Criminal Investigators to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Detective and Criminal Investigator. Applicants must:

  • Have 4+ years full-time work experience in this occupation;
  • Be based in the US, UK, or Canada
  • Commit to a minimum of 15 hours per week

Community Health Workers

Referral Link

$60-$80 hourly

Mercor is recruiting Community Health Workers to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Community Health Worker. Applicants must:

  • Have 4+ years full-time work experience in this occupation;
  • Be based in the US, UK, or Canada
  • Commit to a minimum of 15 hours per week

Writers and Authors

Referral Link

$60-$95 hourly

Mercor is recruiting Writers and Authors to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Writer and Author.

Applicants must:

  • Have 4+ years full-time work experience in this occupation;
  • Be based in the US, UK, or Canada
  • Commit to a minimum of 15 hours per week

Eligibility Interviewers, Government Programs

Referral Link

$60-$80 hourly

Mercor is recruiting Eligibility Interviewers, Government Programs to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as an Eligibility Interviewer, Government Programs. Applicants must:

  • Have 4+ years full-time work experience in this occupation;
  • Be based in the US, UK, or Canada
  • Commit to a minimum of 15 hours per week

Cartographers and Photogrammetrists

Referral Link

$60-$105 hourly

Mercor is recruiting Cartographers and Photogrammetrists to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Cartographer and Photogrammetrist. Applicants must:

  • Have 4+ years full-time work experience in this occupation;
  • Be based in the US, UK, or Canada
  • Commit to a minimum of 15 hours per week

Geoscientists, Except Hydrologists and Geographers

$85-$100 hourly

Referral Link

Mercor is recruiting Geoscientists, Except Hydrologists and Geographers to work on a research project for one of the world’s top AI companies. This project involves using your professional experience to design questions related to your occupation as a Geoscientist (Except Hydrologists and Geographers). Applicants must:

  • Have 4+ years full-time work experience in this occupation;
  • Be based in the US, UK, or Canada
  • Commit to a minimum of 15 hours per week

Year in Review 2025 and AI Predictions

For a brief year in review: total views for my two websites decreased this past year. For this blog, I will be a few thousand shy of 100,000 views (in 2023 I had over 150k views, and in 2024 over 140k). The Crime De-Coder site will only get around 15k views.

Part of it is that I posted less – this will be the 21st blog post this year on the personal blog (2023 had 46 posts and 2024 had 32). The Crime De-Coder site had 12 blog posts, so pretty consistent with the prior year. Traffic for both is pretty bursty, with large bouts coming from Hacker News – if a post makes the front page, I can get 1k to 10k views in a day or two. So the 2024 stats for the Crime De-Coder site reflect a few of those Hacker News bumps I did not get in 2025.

Some of it could legitimately be traditional Google search being usurped by the gen AI tools. This is the first year I had appreciable referrals from ChatGPT, but they total less than 1,000, and the other tools send a trivial number of referrals. If I worried more about SEO, I would have more regularly updated content (old pages are devalued quite a bit by Google, and it seems to be getting more severe over time).

I have upped my use of the free tools quite a bit. ChatGPT knows me pretty well, and I use Claude Desktop almost every day as well.

An IAM policy scroll is more of a nightmare, and I definitely ask more python questions than R, but the cartoon desk is pretty close to spot on. I am close to paying for an Anthropic subscription for Claude Code credits (I currently use pay-as-you-go via Bedrock, and this is the first month I went over $20).

I can never be sure which pages on the blog will be popular. My most popular post last year was Downloading Police Employment Trends from the FBI Data Explorer, a 2023 post that at random times would get several hundred visits within an hour. (Some bot collecting sites? I do not know.) If it is actual people, you would want to check out my Sworn Dashboard site, where you can look at trends for PDs much more easily than downloading all the data yourself!

One thing that has grown, though, is my short-form posting on LinkedIn on my Crime De-Coder page. Total impressions for the year are over 340k (see the graph), and I am currently a few shy of 4,400 followers.

LinkedIn is nice because it can be slightly longer form than the other social media sites. I would suggest you follow me there (in addition to signing up for RSS feeds for the two sites). That is the easiest way to follow my work.

I also took over as a moderator of the Crime Analysis Reddit forum. It is better than the IACA forums in my opinion, so I encourage folks to post crime analysis questions there.

Crime De-Coder Work

Crime De-Coder work has been steady (but not increasing). Similar to last year, I had several consulting gigs conducting crime analysis for premises liability cases (and one other case I may share my opinions on once it is over), along with some small projects with non-profits and police departments.

One big project was a python training in Austin.

The Python Book (which I also translated to Spanish and French) had a trickle of new sales – 2024 had around 100 sales and 2025 around 50. It is close to 2/3 print sales and 1/3 epub, so you should definitely offer physical prints if you are still selling books.

Doing trainings basically makes writing the book worth it, but I do hope the book eventually makes its way into grad school curricula. (Only one course so far.) I have pitched to grad schools to have me run a bootcamp similar to what I do for crime analysts, so if interested let me know.

The biggest new thing was that Crime De-Coder got an Arnold grant, working with Denver PD on an experiment to evaluate a chronic offender initiative.

At the Day Gig

At my day gig, I was officially promoted to a senior manager and then quickly to a director position. Hence you get posts like what to show in your tech resume and notes on project management.

One of the reasons I am big on python – it is the dominant programming language in data science. It is hard for me to recruit from my network, as the majority of individuals just know a little R. (If you are a hard-core R person with packages and well-executed public repos, I could more easily believe you will be able to migrate to python to work on my team.)

So my advice: learn python if you want to be a data scientist (and see other job market advice in my archived newsletter).

AI Predictions

At the day gig, my work went from 100% traditional supervised machine learning models to more like 50/50 traditional vs generative AI applications. The genAI hype is real, but I think it is worthwhile putting my thoughts to paper.

The biggest question is: will AI take all of our jobs? I think a more likely end scenario is that the AI tools just become better at helping humans do tasks. The leap from helping a human do something faster to an AI tool doing it 100% on its own with zero human input is hard. The models are getting incrementally better, but fully replacing people in a substantive way will require another big advancement in fundamental capabilities. Making a human 10x more productive is easier and will still make the AI companies a ton of money.

Sometimes people see the 10x idea and say it will still take jobs, just not 100% of them. That view assumes there is only a finite amount of work to be done. That assumption is clearly not true – being able to do work faster and cheaper induces demand for more work. The example of calculators creating more banking jobs, not fewer, is basically the same phenomenon.

One of the critiques of the current systems is that they are overvalued, so we are in a bubble. I do not remember where I read it, but one estimate was that if everyone in the US spent $1 a day on the different AI tools, that would justify the current valuations of OpenAI, Anthropic, NVIDIA, etc. I think that is totally doable – at Gainwell we spend a few thousand dollars a workday on the foundation models for just a few projects, and we are going to continue to roll out more and more. Gainwell is a company with around 6k employees for reference, and our current AI applications touch far fewer than 1k of those employees. We have plenty of room to grow those applications.

It is super hard, though, to build systems that help people do things faster. And we are talking improvements like “this thing that used to take 30 minutes now takes 15 minutes”. If you have 100 people doing that thing all the time, though, the costs of the models are low enough that it is an easy win.

And this mostly only holds true for knowledge economy work that can be done entirely via software. There still need to be fundamental improvements in robotics to do physical things. The tailor’s job is safe for the foreseeable future.

The change in the data science landscape toward more generative AI applications definitely requires social scientists and analysts to up their game and learn a new set of tools. I have another book in the works to address that, so hopefully you will see it early next year.

Advice for crime analysts to break into data science

I recently received a question about a crime analyst looking to break into data science, and figured it would be a good topic for a blog post. I have written many resources over the years targeting recent PhDs, but the advice for crime analysts is not all that different: you need to pick up some programming, and likely some more advanced tech skills.

For background, the individual had SQL + Excel skills (many analysts may just have Excel). For the vast majority of analyst roles you should be quite adept at SQL, but SQL alone is not sufficient for even an entry-level data science role.


For entry-level data science, you will need to demonstrate competency in at least one programming language. The majority of positions will want you to have python skills. (I wrote an entry-level python book exactly for someone in your position.)

You will likely also need to demonstrate competency in machine learning or in using large language models for data science roles. It used to be that Andrew Ng’s courses were the best recommendation (I see he has a spin-off, DeepLearning.AI, now) – that is second hand though, as I have not personally taken them. LLMs are more popular now, so prioritizing learning how to call the APIs, build RAG systems, and do prompt engineering will I think make you slightly more marketable than traditional machine learning.

I have personally never hired anyone into a data science role without a master’s. That said, I would not have a problem with it if you had a good portfolio (nice website, GitHub contributions, etc.).

You should likely start looking at and applying to “analyst” roles now. Don’t worry if they ask for programming experience you do not have – just apply. For many roles the posting is clearly wrong or has totally unrealistic expectations.

At larger companies, analyst roles can have a better career ladder, so you may just decide to stay in that role. If not, you can continue with additional learning opportunities to pursue a data science career.

Remote is more difficult than in person, but I would start by identifying companies that are crime analysis adjacent (LexisNexis, ESRI, Axon) and applying to their current open analyst positions.

For additional resources I have written over the years:

The alt-ac newsletter has various programming and job search tips. The 2023 blog post goes through different positions (if you want, it may be easier to break into project management than data science, though you have a good background to get senior analyst positions), and the 2025 blog post goes over how to build a portfolio of work.

(Cover page, Data Science for Crime Analysis with Python)

Why I like working at Gainwell

There is an opening for a principal data science position on my team at Gainwell. I was promoted to a director role a few months ago (I was an individual contributor for 4+ years), so this position will directly report to me.

There are of course mixed reviews if you google what it is like to work at Gainwell Technologies. Gainwell is a big organization (maybe over 6,000 employees at this point), and at any big org the experience can be totally different depending on the team. So I can describe what it is like to work for me and on my team.

I have been at HMS (which then merged with Gainwell) since the end of 2019. The way I describe what Gainwell does: we administer Medicaid claims for many states, but most of what my team does is support efforts to fix bad Medicaid claims (more commonly referred to as Fraud/Waste/Abuse). “Fix” mostly means the bill was wrong (too high or for the wrong services), or the bill should have gone to a different payer (seems trivial, but billions per year are billed to Medicaid in the US that should have gone to a commercial carrier or car insurance).

I started as a data scientist and went up through the ranks with a promotion every few years (our levels are JR -> Advisor -> Principal -> Sr Manager -> Director). For a list of projects folks on my team are working on:

  • structured labeling for medical records to help with audits
  • supervised machine learning models to help make prior authorizations go faster
  • structured extraction for many types of documents to auto-verify the contents
  • improving supervised machine learning models in production to improve identification of third parties liable for claims
  • building tools to auto-identify potential Fraud/Waste/Abuse in large databases of claims
  • making our master name index record linkage project run much faster and have more matches

This specific role will be to help with that last bullet. So it is a mix of the more recent fad of GenAI tools, but we also do a bunch of traditional machine learning. For this role I want folks to know at least two of {SQL, Machine Learning, Python}.

We are mostly just solution architects, using python to glue together different processes to make smart decisions across the org. Gainwell is a very federated and older company built up from acquisitions over the years – my team is one of the few that works across the org with many different teams. While many people at Gainwell have a data science title, my team is the AIML (artificial intelligence and machine learning) team at Gainwell.

This work is important – my team alone is associated with models that help save states 9 figures on Medicaid claims per year (I imagine it is well over 10 figures across the whole org). For a bit more about the team:

  • all remote, we have a mix across the continental US. Most of Gainwell is in central, so often the schedule follows central time for meetings (think many stand-ups at 9:30-10am eastern)
  • my team has a bunch of smaller groups working on individual projects (think 2 or 3 max assigned to work on these projects). This position will be number 8 under me.
  • We are always tied to revenue when we take on a project (think generally if we cannot justify at minimum 1e6 in savings or increased revenue, we will not take on that project). This is important, as we are not just treated like IT filling tickets, we are deeply integrated with the business teams we are delivering solutions for.
  • You do not have to make sales, we are building things internally (think your clients are different teams inside of Gainwell). We have more work than we can do.
  • We are committed to building things fast. Those first 4 bullets are things we built in the past 6 months and are already generating millions of dollars of revenue.

So we work with legacy on-premises Hadoop systems, or Databricks, or AWS deployments calling Bedrock or OpenAI, or traditional machine learning – we just get a new project and figure it out. Do not take this to mean you need to know everything (though if you put some random tech on your resume like Docker or time series analysis, be prepared to answer a random question about it when I interview you!). But I do want smart folks who can learn and adapt to the situation at hand. Really the only consistent tech across projects is python and SQL.

For work life balance:

  • I have worked on the weekend a total of 2 times in my tenure at Gainwell that I can remember (and those were less than 1 hour)
  • Because we are remote, we are uber flexible with scheduling (need to drop off or pick up your kid from school during the day, can very likely work around that schedule). Basically all I care about is you get your work done, and individual contributors will just need to figure out a few regular meetings (like stand ups or other meetings with the business group you are working with)
  • Flex PTO (basically only need permission if you are taking more than a week off)

I shared this on LinkedIn and slightly dread it (I will get a million messages) – but readers of this blog are different. If you think you may be interested in the private sector, feel free to reach out. (Let’s just get criminologists to take over – Gio now also has his own team at Gainwell.)

Since this is a principal position, recent grads will be tougher to advocate for, but I can give feedback what I am looking for (and it is good for you to be on my radar for future positions). If you are a superstar and the salary is not enough, reach out and we can have a one on one chat.

The main thing is most of my network uses R, not python. Basically the way I view it is a weighted scale. If you are really good at R (have public code examples and packages), I can be more confident you can learn python. If you already know python, even just the typical replication code sets will do (a much lower bar). There are no excuses not to learn python – I wrote a book to help with that.

What to show in your tech resume?

Jason Brinkley on LinkedIn the other day had a comment on the common look of resumes – I disagree with his point in part but it is worth a blog post to say why:

So first, when giving advice I try to be clear about what are just my idiosyncratic positions vs advice that I feel is likely to generalize. When I say you should apply to many positions, because your probability of landing any single position is small, that is quite general advice. But here, I have personal opinions about what I want to see in a resume, and I do not really know what others want to see. Resumes, when cold applying, probably have to go through at least two layers (HR/recruiter and the hiring manager), who each need different things.

People who have different colored resumes, or different formats (sometimes with a sidebar), I do not remember at all. I only care about the content. So what do I want to see in your resume? (I am interviewing for mostly data scientist positions.) I want to see some type of external verification that you actually know how to code. Talk is cheap – it is easy to list “I know these 20 python libraries” or “I saved our company 1 million buckaroos”.

So things I personally like seeing in a resume are:

  • code on github that is not a homework assignment (it is OK if unfinished)
  • technical blog posts
  • your thesis! (or other papers you were first/solo author)

Very few people have these things, so if you do and you land in my stack, you are already at the like 95th percentile (if not higher) for resumes I review for jobs.

The reason I want outside verification that you actually know what you are doing is that people lie. For our tech round, our first question is “write a python hello world program and execute it from the command line” – around half of the people we interview fail this test. These are all people who list themselves as experts in machine learning and large language models, with years of experience in python, etc.

My resume is excessive, but I try to practice what I preach (HTML version, PDF version).

I added some color, but have had recruiters ask me to take it off the resume before. So how many people actually click all those links when I apply to positions? Probably few if any – but that is personally what I want to see.

There are really only two pieces of advice I have seen repeatedly about resumes that I think are reasonable, but it is advice not a hard rule:

  • I have had recruiters ask for specific libraries/technologies at the top of the resume
  • Many people want to hear about results for project experience, not “I used library X”

So while I dislike the glut of people listing 20 libraries, I understand it from the point of view of a recruiter – they have no clue, so they are just trying to match the tech skills as best they can. (The matching at this stage I feel may be worse than random, in that liars are incentivized – hence my insistence on showing actual skills in some capacity.) It is infuriating when a recruiter does not understand that some idiosyncratic piece of tech is totally exchangeable with what you did, or that it is trivial to learn on the job given your prior experience, but that is not going to go away anytime soon.

I’d note that at Gainwell we have no ATS or HR filtering like this (the only filtering is for geographic location and citizenship status). In many circumstances I would actually rather see technical blog posts or personal GitHub code than “I saved the company 1 million dollars”, as that claim is just as likely to be embellished as the technical skills. For less technical hiring managers, though, it is probably a good idea to translate technical specs into plainer business implications.

I translated my book for $7 using openai

The other day an officer from the French Gendarmerie commented that they use my python for crime analysis book. I asked that individual, and he stated they all speak English. But given my book is written in plain text markdown and compiled using Quarto, it is not that difficult to pipe the text through a tool to translate it into other languages. (Knowing that epubs under the hood are just HTML, it would not surprise me if there is some epub reader that can use Google Translate.)

So you can see now I have available in the Crime De-Coder store four new books:

The ebook versions are normally $39.99, and print is $49.99 (both available worldwide). For the next few weeks, you can use promo code translate25 (until 11/15/2025) to purchase the epub versions for $19.99.

If you want to see a preview of the books first two chapters, here are the PDFs:

And here I added a page on my crimede-coder site with testimonials.

As the title says, this in the end cost (less than) $7 to convert to French (and ditto to convert to Spanish).

Here is code demoing the conversion. It uses OpenAI’s GPT-5 model, but smaller and cheaper models would likely work just fine if you did not want to fork out the $7. It ended up being a quite simple afternoon project (parsing the markdown ended up being the bigger pain).

So the markdown for the book in plain text looks like this:

It ends up that because markdown uses blank lines to denote different sections, those are a fairly natural break point for the translation. These GenAI tools cannot repeat back very long sequences, but a paragraph is a good length – long enough to have additional context, but short enough for the machine not to go off the rails when trying to return the text you input. Then I just have extra logic to skip code sections (which start and end with three backticks). I don’t even bother to parse out the other sections (like LaTeX or HTML); I just include in the prompt an instruction not to modify those.

So I just read in the Quarto document, split it on blank lines, and feed the text sections into OpenAI. I did not test this very much, just used the current default gpt-5 model with medium reasoning. (It is quite possible a non-reasoning smaller model would do just as well. I suspect the open models will do fine.)
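A minimal sketch of that loop looks something like the code below. This is illustrative rather than the exact script I used for the book – the file names and the prompt wording are placeholders – but it shows the pattern: split on blank lines, pass code fences through untouched, and translate everything else.

# minimal sketch, not the exact book script; file names and prompt are placeholders
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Translate the following markdown from English to French. "
          "Do not modify LaTeX, HTML, or anything inside backticks. "
          "Return only the translated text.")

def translate(chunk):
    resp = client.responses.create(
        model="gpt-5",
        reasoning={"effort": "medium"},
        input=PROMPT + "\n\n" + chunk,
    )
    return resp.output_text

with open("book.qmd", encoding="utf-8") as f:
    text = f.read()

out, in_code = [], False
for chunk in text.split("\n\n"):
    if in_code or chunk.lstrip().startswith("```"):
        out.append(chunk)               # leave code sections untouched
    else:
        out.append(translate(chunk))    # translate prose paragraphs
    if chunk.count("```") % 2 == 1:     # an unmatched fence toggles code mode
        in_code = not in_code

with open("book_fr.qmd", "w", encoding="utf-8") as f:
    f.write("\n\n".join(out))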

You will ultimately still want someone to spot check the results, and then do some light edits. For example, here is the French version when I am talking about running code in the REPL, first in English:

Running in the REPL

Now, we are going to run an interactive python session, sometimes people call this the REPL, read-eval-print-loop. Simply type python in the command prompt and hit enter. You will then be greeted with this screen, and you will be inside of a python session.

And then in French:

Exécution dans le REPL

Maintenant, nous allons lancer une session Python interactive, que certains appellent le REPL, boucle lire-évaluer-afficher. Tapez simplement python dans l’invite de commande et appuyez sur Entrée. Vous verrez alors cet écran et vous serez dans une session Python.

So the acronym is carried forward, but the description of the acronym is not. (And I went and edited that for the versions on my website.) But look at this section in the intro talking about GIS:

There are situations when paid for tools are appropriate as well. Statistical programs like SPSS and SAS do not store their entire dataset in memory, so can be very convenient for some large data tasks. ESRI’s GIS (Geographic Information System) tools can be more convenient for specific mapping tasks (such as calculating network distances or geocoding) than many of the open source solutions. (And ESRI’s tools you can automate by using python code as well, so it is not mutually exclusive.) But that being said, I can leverage python for nearly 100% of my day to day tasks. This is especially important for public sector crime analysts, as you may not have a budget to purchase closed source programs. Python is 100% free and open source.

And here in French:

Il existe également des situations où les outils payants sont appropriés. Les logiciels statistiques comme SPSS et SAS ne stockent pas l’intégralité de leur jeu de données en mémoire, ils peuvent donc être très pratiques pour certaines tâches impliquant de grands volumes de données. Les outils SIG d’ESRI (Système d’information géographique) peuvent être plus pratiques que de nombreuses solutions open source pour des tâches cartographiques spécifiques (comme le calcul des distances sur un réseau ou le géocodage). (Et les outils d’ESRI peuvent également être automatisés à l’aide de code Python, ce qui n’est pas mutuellement exclusif.) Cela dit, je peux m’appuyer sur Python pour près de 100 % de mes tâches quotidiennes. C’est particulièrement important pour les analystes de la criminalité du secteur public, car vous n’avez peut‑être pas de budget pour acheter des logiciels propriétaires. Python est 100 % gratuit et open source.

So it translated GIS to SIG in French (Système d’information géographique). Which seems quite reasonable to me.

I paid an individual to review the Spanish translation (if any readers are interested in giving me a quote for copy-edits of the French version, I would appreciate it). She stated it is overall very readable, but has many minor issues. Here is a sample of her suggestions:

Total number of edits she suggested were 77 (out of 310 pages).

If you are interested in another language, just let me know. I am not sure how well translation works for the Asian languages, but I imagine it works OK out of the box for most languages derived from Latin. Another benefit of self-publishing: I can have the French version available now, and if I am able to find someone to help with the copy-edits, I will just update the draft after I get their feedback.

Recording your mouse and keyboard with python

The different LLM providers have released computer use tools, Google’s Gemini being one of the most recent. The way these work, you submit an image (a screenshot), and the model then does tasks given general instructions – actually manipulating the mouse and keyboard, asking for human-in-the-loop input at various stages, etc. They seem cool, but I am lacking clear examples of where I would use them.

They really make sense for very complicated workflows that vary. The things that make the most sense to spend time automating, though, are boring things – things that have a specific set of inputs and outputs, where the machine can entirely make the routing decisions based on rules you define at the onset. When I was a crime analyst, for example, I worked with all sorts of janky old desktop software that I needed to use to generate reports and email the results to key individuals.

Here, as a day project, I created a set of python functions to record your keyboard and mouse inputs, and then replay those inputs. The python code is here; it ends up being pretty simple (mostly written by ChatGPT and slightly modified by me). And here is a YouTube video demonstration showing what you can do.

So even if you have software that does not have an easy way to integrate with other code, here you can just record your movements and replay them with python on a schedule to accomplish that monotonous task.
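To give a sense of how little code this takes, below is a minimal record-and-replay sketch using the pynput library. It is a toy illustration (not the code in the linked repo, which handles more event types): it records mouse clicks and key presses with timestamps, stops when you press Esc, and then plays everything back with the same delays.

# toy record-and-replay sketch with pynput; not the code from the linked repo
import time
from pynput import mouse, keyboard

events = []          # list of (delay_since_last_event, kind, payload)
last = time.time()

def stamp():
    global last
    now = time.time()
    delay, last = now - last, now
    return delay

def on_click(x, y, button, pressed):
    if pressed:
        events.append((stamp(), "click", (x, y, button)))

def on_press(key):
    if key == keyboard.Key.esc:
        return False                      # stop recording on Esc
    events.append((stamp(), "key", key))

with mouse.Listener(on_click=on_click) as ml, keyboard.Listener(on_press=on_press) as kl:
    kl.join()                             # record until Esc is pressed
    ml.stop()

# replay the recording with the original delays
mctl, kctl = mouse.Controller(), keyboard.Controller()
for delay, kind, payload in events:
    time.sleep(delay)
    if kind == "click":
        x, y, button = payload
        mctl.position = (x, y)
        mctl.click(button)
    else:
        kctl.press(payload)
        kctl.release(payload)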

I scraped the CrimeSolutions site

Before I get to the main gist, I am going to talk about another site. The National Institute of Justice (NIJ) paid RTI over $10 million to develop a forensic technology center of excellence over the past 5 years. While this effort involved more than just a website, the only thing that lives on in perpetuity for others to learn from the center of excellence is the resources they provided on the website.

Once funding was pulled, this is what RTI did with those resources:

The website is not even up anymore (it is probably a good domain to snatch up if no one owns it), but you can see what it looked like on the Internet Archive. It likely had over 1,000 videos and pages of material.

I have many friends at RTI. It is hard for me to articulate how distasteful I find this. I understand RTI is upset with the federal government cuts, but simply leaving the website up is a minimal cost (and likely worth it to RTI just for the SEO links to other RTI work).

Imagine you paid someone $1 million for something. They build it, and then later say “for $1 million more, I can do more”. You say OK, then after you have disbursed $500,000 you say “I am not going to spend more”. In response, the creator destroys all the material. This is what RTI did, except they had been paid $11 million and were still to be paid another $1 million. Going forward, if anyone from NIJ is listening, government contracts to build external resources should be licensed in a way that prevents this from happening.

And this brings me to the current topic, CrimeSolutions.gov. It is a bit of a different scenario, as NIJ controls this website. But recently they cut funding to the program, which was administered by DSG.

Crime Solutions is a website where they have collected independent ratings of research on criminal justice topics. To date they have something like 800 ratings on the website. I have participated in quite a few, and I think these are high quality.

To prevent someone (for whatever reason) from simply turning off the lights, I scraped the site and posted the results to GitHub. It is a PHP site under the hood, but converting everything to run as a static HTML site did not work out too badly.
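If you have never mirrored a site before, the basic crawl is not much code. Below is a toy sketch with requests and BeautifulSoup – illustrative only, with a placeholder start URL and output folder; the real scrape needed extra handling for the PHP site’s URL structure and static assets.

# toy sketch of mirroring a site as static HTML; start URL and paths are placeholders
import os
import urllib.parse
import requests
from bs4 import BeautifulSoup

START = "https://crimesolutions.gov/"   # placeholder start URL
seen, queue = set(), [START]

while queue:
    url = queue.pop()
    if url in seen or not url.startswith(START):
        continue
    seen.add(url)
    resp = requests.get(url, timeout=30)
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    # save the page to a local path that mirrors the URL
    path = urllib.parse.urlparse(url).path.strip("/") or "index"
    fname = os.path.join("mirror", path) + ".html"
    os.makedirs(os.path.dirname(fname), exist_ok=True)
    with open(fname, "w", encoding="utf-8") as f:
        f.write(resp.text)
    # queue internal links to crawl next
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        queue.append(urllib.parse.urljoin(url, a["href"]))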

So for now, you can view the material at the original website. But if that goes down, you have a nearly identical functional site mirrored at https://apwheele.github.io/crime-solutions/index.html. So at least those 800-some reviews will not be lost.

What is the long-term solution? I could be a butthead and take down my GitHub page tomorrow (so clone it locally) – me scraping the site is not really a solution so much as a stopgap.

Ultimately we want a long-term, public storage solution that is not controlled by a single actor. The best solution we have now is ArDrive, from the folks at Arweave. For a one-time upfront purchase, Arweave guarantees the data will last a minimum of 200 years (they fund an endowment to continually pay for upkeep and storage costs). If you want to learn more, stay tuned, as Scott Jacques and I are working on migrating much of the CrimRXiv and CrimConsortium work to this more permanent solution.

Recommend reading The Idea Factory, Docker python tips

A friend recently recommended The Idea Factory: Bell Labs and the Great Age of American Innovation by Jon Gertner. It is one of the best books I have read in a while, so I also want to recommend it to the readers of my blog.

I was vaguely familiar with Bell Labs given my interest in stats and computer science. John Tukey gets a few honorable mentions, but Claude Shannon is a central character of the book. What I did not realize is that almost all of modern computing can be traced back to innovations developed at Bell Labs. For a sample, these include:

  • the transistor
  • fiber optic cables (I did not even know that fiber is made of very thin strands of glass)
  • the cellular network with smaller towers
  • satellite communication

And then you get a smattering of different discussions as well, such as the material science that goes into making underwater cables durable and shark resistant.

The backstory is that AT&T in the early 20th century had a monopoly on landline telephones. Similar to how most states now have a single electric provider, they were a private company blessed by the government to have that monopoly. AT&T intentionally maintained a massive research arm that it used to improve communications, but it also gave that research back to the public. Shannon was a pure mathematician; he was not under the gun to produce revenue.

Gertner basically goes through a series of characters who were instrumental in developing some of these ideas, and in creating and managing Bell Labs itself. It is a high-level recounting by Gertner, mostly from historical notebooks. One of the things I really want to understand is how institutions even tackle a project that lasts a decade – things I have been involved in at work that last a year are dreadful just from the transaction costs between so many groups. I can’t even imagine trying to keep on schedule for something so massive. I do not get that level of detail from the book; it is more that someone had an idea, developed a little tinker proof of concept, and then Bell Labs sunk a decade and a small army of engineers into figuring out how to build it in an economical way.

This is not a critique of Gertner (his writing is wonderful, and really gives flavor to the characters). Maybe just sinking an army of engineers on a problem is the only reasonable answer to my question.

Most of the innovation in my field, criminal justice, is coming from the private sector. I wonder (or maybe hope and dream is a better description) if a company, like Axon, could build something like that for our field.


Part of the point for writing blog posts is that I do the same tasks over and over again. Having a nerd journal is convenient to reference.

One of the things that I do not have to do commonly, but comes up about once a year at my gig, is putzing around with Docker containers. As a note to myself: when building python apps, to get correct layer caching you want to install the libraries first, and then copy the app over.

So if you do this:

FROM python:3.11-slim
COPY . /app
RUN pip install --no-cache-dir -r /app/requirements.txt
WORKDIR /app
CMD ["python", "main.py"]

Every time you change a single line of code, you need to re-install all of the libraries. This is painful. (For folks who like uv, that does not solve the problem, as you still need to download the libraries every time in this approach.)

A better workflow then is to copy over the single requirements.txt file (or .toml, whatever), install that, and then copy over your application.

FROM python:3.11-slim
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . /app
WORKDIR /app
CMD ["python", "main.py"]

So now, only when I change the requirements.txt file will I need to redo that layer.

Now I am a terrible person to ask about dev builds and testing code in this set up. I doubt I am doing things the way I should be. But most of the time I am just building this.

docker build -t py_app .

And then I will have logic in main.py (or swap out with a test.py) that logs whatever I need to the screen. Then you can either do:

docker run --rm py_app

Or if you want to bash into the container, you can do:

docker run -it --rm py_app bash

Then from in the container you can go into the python REPL, edit a file using vim if you need to, etc.

Part of the reason I want data scientists to be full stack is that at work, if I need another team to help me build and test code, it adds a minimum of 3 months to my project. Probably one of the most complicated things my team and I have done at the day job is figure out the correct magical incantations to properly build ODBC connections to various databases in Docker containers. If you can learn about boosted models, you can learn how to build Docker containers.