Sentence embeddings and caching Hugging Face models in GitHub Actions

For updates on Crime De-Coder:

This is actually more borne out of demand from interacting with crime analysis groups – it doesn’t really make sense for me to swoop in and say “do X/Y/Z, here is my report”. My experience doing that with PDs is not good – my recommendations often do not get implemented. So it is important to upskill your current analysts, so they can conduct more rigorous analysis over time.

For some things it is better to have an external person do the report (such as program evaluation, cost-benefit analysis, or racial bias metrics). But things like generating hotspots and chronic offender lists should be something your crime analysis unit learns how to do.

You can of course hire me to do those things, but if you don’t train your local analysts to take over the job, you have to perpetually hire me to do it. That may make sense for small departments without an internal crime analysis unit, but large agencies should be using their own crime analysts for reports like that.

In this post I also wanted to share some work on NLP. I have not been immune to all the craze at work. I am more impressed with vector embeddings and semantic search, though, than I am with GenAI applications. GenAI I think will be very useful, like a more advanced auto-complete, but I don’t think it will put us all out of work anytime soon.

Vector embeddings, for those not familiar, take a text input and turn it into a vector of numbers. You can then do nearness calculations with those vectors. So say you had plain-text crime narrative notes on modus operandi, and wanted to search “breaking car window and stealing purse” – you don’t want an exact match on that phrase, but a search for text that is similar. So the vector similarity search may return a narrative like “break truck back-window with pipe, take backpack”. Not exactly the same, but semantically very similar.

The sentence-transformers library makes this super simple (no need to use external APIs for this).

from sentence_transformers import SentenceTransformer, util

# small, fast general-purpose embedding model
model = SentenceTransformer('all-MiniLM-L6-v2')

# Base query
query = ['breaking car window and stealing purse']

# Example MO narratives
mo = ['break truck back window with pipe, steal backpack',
      'pushing in a window AC unit, steal TV and sneakers',
      'steal car with cloned fob, joyride']

# Compute embeddings for both lists
qemb = model.encode(query, convert_to_tensor=True)
moemb = model.encode(mo, convert_to_tensor=True)

# Compute cosine similarities
cosine_scores = util.cos_sim(moemb, qemb)
print(cosine_scores)

So you have your query, and often people just return the top-k results, since knowing “how close is reasonably close” is difficult in practice (Google pretty much always returns some results, even if they are off the mark!).
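For the top-k lookup, sentence-transformers has a built-in helper, so you do not need to write the sort yourself. A minimal sketch continuing the example above (top_k=2 is just for illustration):

# return the top-k most similar MO narratives for the query
hits = util.semantic_search(qemb, moemb, top_k=2)

# one list of hits per query, each with a corpus index and score
for h in hits[0]:
    print(mo[h['corpus_id']], round(h['score'], 3))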

I am quite impressed with the general models for this (even with very idiosyncratic documents and jargon they tend to do quite well). But if you want embeddings from models trained on more specific domains, I often turn to the simpletransformers library. Here is an example using a model trained on clinical case notes.

from simpletransformers.language_representation import RepresentationModel

# ClinicalBERT, a BERT model trained on clinical notes
model = RepresentationModel("bert", "medicalai/ClinicalBERT")
sent1 = ["Procedure X-ray of wrist"] # needs to be a list
# mean combines the token vectors into one embedding per sentence
wv1 = model.encode_sentences(sent1, combine_strategy="mean")

For another example, I shared a GitHub repo, hugging_cache, where I did some work on how to cache Hugging Face models in a GitHub Actions script. This is useful for unit tests, so you don’t need to redownload the model every time.

GitHub’s cache size for free accounts is 10 gigs per repo (and you need to cache the environment itself, which is a few gigs). So Hugging Face models up to around 4 (maybe 5) gigs will work out OK in this workflow. Not quite up to par with the big Llama models, but plenty large enough for these smaller embedding models.
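The gist of the approach is pointing actions/cache at the Hugging Face cache directory, so the download step is a no-op on subsequent runs. A hypothetical sketch of the workflow step (see the hugging_cache repo for the working version – the cache key here is just illustrative):

# hypothetical snippet from a workflow .yml file
- name: Cache Hugging Face models
  uses: actions/cache@v3
  with:
    path: ~/.cache/huggingface
    key: hf-models-${{ runner.os }}-v1

- name: Run unit tests
  run: pytest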

It is not very difficult to build a local, in-memory vector search database, which will be sufficient for many individuals. So a crime analyst could build a local search engine for use – rebuild the vector DB in a batch job, then just have a tool/server to do the local lookup. (Or just use a local SQLite database – that is probably sufficient when only a half dozen analysts/detectives use the tool every now and then.)
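To give a flavor of how simple the in-memory version can be, here is a hypothetical sketch using just numpy on top of the same embedding model – the batch job embeds the corpus once, and each lookup is a single matrix multiply:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')

# batch job: embed the narratives once and keep the matrix around
corpus = ['break truck back window with pipe, steal backpack',
          'pushing in a window AC unit, steal TV and sneakers']
cemb = model.encode(corpus, normalize_embeddings=True)

def search(query, k=10):
    # embeddings are normalized, so dot product = cosine similarity
    qv = model.encode([query], normalize_embeddings=True)[0]
    sims = cemb @ qv
    top = np.argsort(-sims)[:k]
    return [(corpus[i], float(sims[i])) for i in top]

print(search('breaking car window and stealing purse', k=2))

For a batch job you would save cemb to disk (or rows in SQLite) instead of recomputing it, but the lookup logic is the same.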

Text analysis, alt competition sites, and ASC

A bit of a potpourri blog post today. First, I am not much of a natural language processing wiz. But based on the work of Peter Baumgartner at RTI (assigning reduced-form codes based on text descriptions), I was pointed to the simpletransformers library. It is very easy to download complicated NLP architectures (like RoBERTa with 100 million+ parameters) and retrain them on your idiosyncratic data.

Much of the issue working with text data is the cleaning, and with these extensive architectures it is not so necessary. See for example this blog post on classifying different toxic comments. Out of the box, the multi-label classification gets an AUC score pretty damn close to the winning entry in the Kaggle contest this data was developed for. No text munging necessary.

Playing around on my personal machine, I have been able to download and re-tune the pretrained RoBERTa model – fitting the same model as the blog post (with just all the defaults), it takes around 7 hours on my GPU.
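The retraining itself is only a few lines. A minimal sketch of the multi-label setup (the two-row DataFrame is a made-up stand-in for the toxic comments data; simpletransformers expects a text column plus a list-valued labels column):

import pandas as pd
from simpletransformers.classification import MultiLabelClassificationModel

# toy stand-in data: text plus a list of 0/1 labels per row
train_df = pd.DataFrame({
    'text': ['you are wonderful', 'you are an idiot'],
    'labels': [[0, 0], [1, 1]]})

model = MultiLabelClassificationModel('roberta', 'roberta-base',
                                      num_labels=2, use_cuda=False)
model.train_model(train_df)

preds, raw_outputs = model.predict(['some new comment'])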

The simpletransformers library has a ton of different pre-set architectures for different problems. For the ones I have played around with on labelled data (e.g. you have text data on the right-hand side, and want to predict a binary or multinomial outcome), I have had decent success.

Another text library I have played around with (although I have not had as much success with it in production) is dirty_cat. This is for unsupervised modeling, which unfortunately is a harder task to evaluate for success than supervised learning.
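For a flavor of what dirty_cat does: it encodes messy string categories into numeric features based on string similarity, which can then go into a standard downstream model. A minimal sketch with made-up category strings:

import numpy as np
from dirty_cat import SimilarityEncoder

# messy free-text categories with typos/variants
jobs = np.array(['police officer', 'police oficer',
                 'crime analyst', 'analyst, crime']).reshape(-1, 1)

enc = SimilarityEncoder()
X = enc.fit_transform(jobs)  # one similarity column per unique category
print(X.round(2))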

Alt Competition Sites

I recently spent two days trying to work on a new Kaggle competition, a follow-up to the toxic comments one above. My solution is nowhere close to the current leaderboard though, and given the prize total (and, I expect, something like 5,000 participants), it just isn’t worth my time to work on it more.

Two recent government competitions I did compete in though are the NIJ recidivism challenge and the NICHD maternal morbidity challenge. (I will release my code for the maternal morbidity one when the competition is fully scored – it is a fuzzy one, not a best-predictive-accuracy one.) Each of these competitions had under 50 teams participate, so there is much less competition than on Kaggle. The CDC has a new one as well, for using a network-based approach to violence and drug problems.

For some reason these competitions are not on the Challenge.gov website. Another site I wanted to share as well is DrivenData. If I had found that sooner, I might have given the floodwater competition a shot.

I have mixed feelings about these competitions – they are risky. For each of NIJ and NICHD individually, I probably spent something like $10,000 to $20,000 of my personal time on the code solutions. I knew NIJ would not have many submissions (I did not participate in the geographic forecasting portion, and saw some people win with silly strategies). If you submitted anything in the student category you would have won close to the same amount as my team did (as not all the slots were filled up). And the NICHD was quite onerous in terms of paperwork, so I figured it would also have low turnout (and the prizes are quite good). So whether it is worth giving one a shot comes down to guessing the total competition pool, the level of effort for a good submission, and how the prizes are divvied up, as well as the total dollar amount.

The CDC violence one has strangely low prizes, so I wouldn’t bother to submit unless I already had some project I was working on anyway. I think a better use of the Fed challenges would be to have easier pilot work, and based on the pilot work fund larger projects. So consider the initial challenge roughly equivalent to a grant proposal. This especially makes sense for generating fairness algorithms (not so much for who has the best hyper-tuned XGBoost model on a particular train/test dataset).

Missing ASC

The American Society of Criminology conference is going on now in Chicago. A colleague emailed the other day asking if I was coming, and I do feel I am missing out on meeting up with friends. The majority of presentations are quite bad (both in content and presentation style), so it is more of an excuse to have a beer with friends than anything.

I debated with my wife earlier in the year about taking a family vacation to Chicago during this conference. We decided against it because of looming covid – I correctly predicted it would still be quite prevalent (and I am guessing it will be indefinitely at this point, given vaccine hesitancy and new variants). I incorrectly predicted, though, that I wouldn’t be able to get a vaccine shot until October (so I am very impressed with the distribution on that front). Even my son has a shot (I didn’t even try to guess when that would happen). So I am not sure if I made the correct choice in retrospect – the risk of contraction is as high as I guessed, but the risk of adverse effects given we have the vaccines is very low.