An example of soft constraints in linear programming

Most of the prior examples of linear programming on my site use hard constraints. These are examples where I say to the model, “only give me results that strictly meet these criteria”, like “only select 40 cases to audit”, or “keep the finding rate over 50%”, etc.

There are alternative ways, though, to tell the model "I want a finding rate over 50%, but still potentially consider solutions with lower finding rates". One way to do that is via soft constraints: modifying the objective function directly to penalize (or favor) particular outcomes. For example, say you knew you could translate a 1% difference in the finding rate from 50% into a value of $1,000. So if our original model is:

Maximize Sum{D_i*Return_i} 

Subject To
  D_i element of {0,1}  #decision variables are 0/1
  Sum{D_i} = 100       #so select 100 cases

We would then place an additional penalty term that looks like this:

Maximize Sum{D_i*Return_i} + Sum{D_i*[(prob_i - 0.5)*1000]}  

Subject To
  D_i element of {0,1}
  Sum{D_i} = 100

So instead of a subject to constraint that says we need to be over a 50% finding rate, we added a second term to the objective that penalizes solutions with an under 50% finding rate. Here, since we select 100 cases, if the finding rate ends up at 49% the objective function takes a hit of $1,000 (a 0.01 shortfall times $1,000 is $10 per case, which summed over the 100 selected cases gives $1,000). This example is also similar to a bi-objective function; here I just set an exact translation between finding rates and returns, but in practice you often don’t have that exact translation.
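To make that concrete, here is a minimal sketch of the soft constraint version using the pulp library in python. The data (returns and predicted finding probabilities) are made up purely for illustration, so treat this as a sketch of the setup rather than code from an actual project:

###############################################################
import numpy as np
import pulp

#made up data purely for illustration
np.random.seed(0)
n = 500
returns = np.random.uniform(100, 5000, n)  #expected return per case
probs = np.random.uniform(0.2, 0.8, n)     #predicted finding rate per case

D = pulp.LpVariable.dicts("D", range(n), cat="Binary")
model = pulp.LpProblem("SoftConstraintExample", pulp.LpMaximize)

#objective: returns plus the soft penalty/bonus around a 50% finding rate
model += (pulp.lpSum(D[i]*returns[i] for i in range(n)) +
          pulp.lpSum(D[i]*(probs[i] - 0.5)*1000 for i in range(n)))

#hard constraint: select exactly 100 cases
model += pulp.lpSum(D[i] for i in range(n)) == 100

model.solve()
selected = [i for i in range(n) if D[i].varValue == 1]
print( len(selected), pulp.value(model.objective) )
###############################################################

To switch back to the hard constraint version, you would drop the second term from the objective and instead add a constraint that the summed finding rate of the selected cases is at least 50 (i.e. an average of 0.5 over the 100 picks).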

Whether hard constraints or soft constraints make sense just depends on your situation. In many situations you can swap one for the other, so different means to the same ends. For a good example of this, my allocating police resources paper on reducing disproportionate minority contact uses hard constraints (Wheeler, 2020), and George Mohler and colleagues have a very similar paper that uses soft constraints (Mohler et al., 2018). I imagine these end up producing very similar results, although in that circumstance I prefer my hard constraint approach, as with George’s you need to fiddle with the magnitude of the penalty term. Also I don’t tend to like changing the loss function for statistical/machine learning models; I prefer changing what you do with the info after you have fit your model (Kleinberg et al., 2018).

Here I provide an example where I think soft constraints make a bit of sense though. Imagine you have continuous predictive outputs, you need to make a binary yes/no decision among those options, but those predictive outputs also have a variance. An example of where this comes up is making loan decisions: you want your portfolio to have a high return, but you also want to lower the variance of those returns.

For a simple example, imagine you are the lending institution and you have the choice between two scenarios:

  • Scenario A: lend 1 person $100,000 with an expected return of $8,000, with a variance of $4,000
  • Scenario B: lend 2 people $50,000 each, with an expected return of $4,000 each, and a variance of $1,000 for each loan

Since you expect to make the same amount of money under each scenario, option B is preferable if the loans are independent of each other (e.g. one going under does not cause the other to go under). In that case variances are additive, so the total variance of option B is $2,000, which is much less volatile than scenario A.

(Sorry to my criminology friends, this example is generic and I strain to find a criminal justice example to apply it to. It would not be crazy to have low volatility vs high volatility hot spots, so you may want to choose a consistent hot spot as opposed to a fleeting one for an intervention. But I don’t think that will happen in practice quite like that. Choosing among expensive high risk/reward vs inexpensive low risk/reward treatment regimes in corrections settings may also make sense, but that is crazy pie in the sky technocratic thinking given the current state of affairs as well.)

Example with Lending Club Data

So to illustrate an example with actual data, I’ve provided Python code fitting a predictive model to Lending Club data on loans. (I got the original dataset from Kaggle.) I am just going to highlight some key points here in the blog post; you will need to go to the code to see everything.

First, I’ve been introduced to this dataset as one for predicting a binary default/no-default outcome. I have code doing that in the snippet as well, and it performed OK. But it was very uncalibrated as to whether my portfolio made money – even though the default estimates were pretty well spot on, my portfolios did not make much money. This is because people who default pay back some of the loan, and quite a few people in the dataset pay back their loans early, so the lenders don’t make as much interest as you would expect at the start of the loan.

So I cut out the middle man and just estimated a random forest model predicting the actual money made on the loan. I only kept cases that are either 'Fully Paid', 'Charged Off', or 'Default', so I don’t model loans that are still ongoing. I then end up modeling the value total_pymnt - loan_amnt. You can look into the code to see the variables I included in the model, but one of the neat things about regression random forests is that not only can you get the mean prediction, you can also look at the variance over all the trees. See below a function to do that (in the 01 py file):

###############################################################
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

#Fit random forest model
model = RandomForestRegressor(n_estimators=1000, 
                              random_state=10, 
                              min_samples_leaf=100)
model.fit(train[x_vars], train[y_var])

#Check the predicted vs actual on the test set
y_pred = model.predict(test[x_vars]) #predicted mean
test['y_pred'] = y_pred

#I want an estimate of the variance
def tree_var(X, rf_mod):
    per_tree_pred = [pd.Series(tree.predict(X), index=X.index) for tree in rf_mod.estimators_]
    pd_res = pd.concat(per_tree_pred, axis=1)
    pd_var = pd_res.var(axis=1)
    return pd_var

test['y_var'] = tree_var(X=test[x_vars], rf_mod=model)
###############################################################

That predicted value and variance are then what I feed into my subsequent linear programming problem (in the 02 py file). Written out, the model is:

Maximize Sum{D_i*(prediction_i - lambda*variance_i)}

Subject To:
  D_i element of {0,1}              #decision variables are 0/1
  Sum{D_i*loan_amount_i} <= 300000  #only have so much $ to loan, so no leveraging
  

Where lambda is the tuner – higher variances get higher penalties. So going back to our two loan example, if lambda = 1, scenario A would be 8000 - 1*4000 = 4000, and scenario B would be 8000 - 1*2000 = 6000, so the penalty would choose scenario B over A. Without the penalty the two scenarios are exactly the same.
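For reference, here is a rough sketch of how that penalized objective could be set up in pulp. I am assuming the test dataframe from the earlier snippet with its y_pred and y_var columns, plus a hypothetical loan_amnt column rescaled to $1000s; the names are placeholders rather than the exact contents of the 02 file:

###############################################################
import pulp

lam = 1.0  #variance penalty tuner

idx = list(test.index)
D = pulp.LpVariable.dicts("D", idx, cat="Binary")
port = pulp.LpProblem("LoanSelect", pulp.LpMaximize)

#objective: predicted return minus lambda times the tree variance
port += pulp.lpSum(D[i]*(test.loc[i,'y_pred'] - lam*test.loc[i,'y_var'])
                   for i in idx)

#budget constraint, loan amounts in $1000s, so 300 = $300k and no leveraging
port += pulp.lpSum(D[i]*test.loc[i,'loan_amnt'] for i in idx) <= 300

port.solve()
picked = [i for i in idx if D[i].varValue == 1]
print( len(picked), pulp.value(port.objective) )
###############################################################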

Since the lambda value is arbitrary, I illustrate the approach by selecting portfolios of loans that total $300,000 (I divided the loan amounts by $1,000 to make the numbers a bit easier to view). This is a totally held out sample of around 5k loans. So you can see my first model (with no variance penalty):

And my second model, with a higher lambda value of 1, selects more (smaller) loans and reduces the variance. Since we have the actual outcomes, I can show that both portfolios turned a profit, each above what I predicted. But the standard deviation for the second portfolio is not quite cut in half.

So you can see in that one selection it worked out OK, but this does not verify that my variance estimates are correct (they are no doubt too small, as you can see the actual returns are way higher than I predicted).

To test them out though I do a simulation. I draw 1000 cases out of those 5000, and then again pick $300k in loans. I do that process 1000 times under a set of different lambda penalty terms, [0, 0.5, 1.0, 2.0, 3.0]. For those simulations, here is the overall distribution of the returns under different penalty terms for the variance.

Note that those histograms have different X axes. It is easier to compare the spread across penalty terms in a boxplot:

So here you can see each scenario has pretty near the same median return (somewhere around $30k), but the penalties reduce the variance. The higher penalties end up selecting portfolios that always at least make money, whereas with the lower penalty terms you do end up losing money in some scenarios.
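For reference, the simulation loop itself is not much code. Here is a rough sketch, reusing the hypothetical column names from the pulp sketch above (so again a sketch, not the exact contents of the 02 file):

###############################################################
import numpy as np
import pulp

def select_portfolio(sub, lam, budget=300):
    #solve the penalized selection on a subsample and return the actual
    #(not predicted) total profit of the selected loans
    idx = list(sub.index)
    D = pulp.LpVariable.dicts("D", idx, cat="Binary")
    mod = pulp.LpProblem("Sim", pulp.LpMaximize)
    mod += pulp.lpSum(D[i]*(sub.loc[i,'y_pred'] - lam*sub.loc[i,'y_var'])
                      for i in idx)
    mod += pulp.lpSum(D[i]*sub.loc[i,'loan_amnt'] for i in idx) <= budget
    mod.solve(pulp.PULP_CBC_CMD(msg=0))
    picks = [i for i in idx if D[i].varValue == 1]
    return sub.loc[picks, y_var].sum()  #realized return, y_var from the RF snippet

lam_grid = [0, 0.5, 1.0, 2.0, 3.0]
sim_returns = {lam: [] for lam in lam_grid}

np.random.seed(10)
for rep in range(1000):
    sub = test.sample(n=1000)  #draw 1000 candidate loans from the held out set
    for lam in lam_grid:
        sim_returns[lam].append( select_portfolio(sub, lam) )
###############################################################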

Unfortunately I did not beat the market in my simple weekend experimentation, so don’t sink a bunch of money into Lending Club based on this! The average returns starting from $300k work out to something like an annualized rate of return of around 3% (over 3 years) for the smaller simulation picks from 1000 cases. These include loans of 60 months as well though. So even though my linear program with the penalty term did a good job of reducing the risk, these aren’t good enough returns for me to put a bunch of my money into Lending Club.

But there are no doubt improvements to be made, both to the modeling and to the portfolio selection. For modeling I would be tempted to try out a discrete time survival model for payments over time, but that would be more work than I could do in a weekend. (Also I only incorporated the easy continuous variables here that I could prepare in just a few minutes, so maybe more feature engineering would boost my results.)

I could also adapt the linear program to take into account covariance between the loans, but I am not sure how to estimate that covariance (a multi-level model perhaps?). You may also want to do some sort of conditional value at risk approach in the linear program: say instead of piping in the variance from the random forests, count up how often you lose money and use that as a penalty or constraint on the system.


Adjusting predicted probabilities for sampling

Ryx had a blog post the other day about how many people confuse how to turn predicted probabilities into a final 0/1 classification decision, Why Do So Many Practicing Data Scientists Not Understand Logistic Regression?. (I highly suggest Ryx’s blog and twitter feed in general; the opinions expressed frequently mirror my own very closely.)

So I won’t go into why SMOTE (synthetic upsampling of the rare class) in general doesn’t make sense for most predictive applications here. (It may in some complicated machine learning scenarios I don’t fully understand.) But there are a few scenarios where downsampling makes total sense. One is case-control studies, where it is costly to sample the control cases. (I was motivated in part to write this because I reviewed a paper the other day that estimated marginal effects on the probability scale using case-control data, so they needed to adjust them using the same technique I show here.) The other scenario, which I expect is more common for the working data scientist, is that you just have a crazy large dataset, so you need to downsample just to fit the model of interest.

Say you are looking at fraud in bank transactions, and you have 500 million transactions and only 500,000 identified fraud cases. It makes total sense to take a sample of 500,000 control cases and then fit your models on the cases vs controls using whatever complicated model you want just so you can get an answer in a decent time on your local machine.

The predicted probabilities from that adjusted sample will be wrong though. But fortunately it is quite easy to adjust them back to the scale you want (and this works just as well for SMOTE upsampling). I illustrate using an example in python and XGBoost. Most examples online show this for GLMs, but it works the same way for any model that returns a predicted probability.

So first let’s load our libraries and create some simulated data. Here the positive class only occurs around 5% of the time.

#############################################
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from xgboost import XGBClassifier

#Creating simulated data

np.random.seed(10)
total_cases = 1000000
x1 = np.random.uniform(size=total_cases)
x2 = np.random.binomial(1,0.5,size=total_cases)
x3 = np.random.poisson(5,size=total_cases)
y_logit = -1 + 0.3*x1 + 0.1*x2 + -0.5*x3 + 0.05*x1*x2 + -0.03*x2*x3
y_prob = 1/(1 + np.exp(-y_logit))
y_bin = np.random.binomial(1,y_prob)
my_vars = ['x1','x2','x3','y_prob','y_bin']
sim_data = pd.DataFrame(zip(x1,x2,x3,y_prob,y_bin),
                        columns=my_vars)
x_vars = my_vars[:3]
y_var = my_vars[-1]

#Checking the distribution; adjust the intercept in y_logit if the share
#of 1's is not to your liking (more negative intercept means fewer 1's)
print( sim_data[y_var].mean() )
sim_data['y_prob'].hist(bins=100)
#############################################

The model is not that complicated (just some interactions), so hopefully XGBoost has no problem fitting it. But say we are in the ‘need to downsample because my data is too big’ scenario. So here I create a training dataset that has a 50/50 split of positive and negative cases, and keep the test data the same.

#############################################
#Creating test/train samples

sim_data['index'] = range(sim_data.shape[0])
train = sim_data[sim_data['index'] <= 700000].copy()
test = sim_data[sim_data['index'] > 700000].copy()

#downsampling the training dataset
#so classes are equal
down_n = train['y_bin'].sum()
down_prop = train['y_bin'].mean()
down_neg = train[train['y_bin'] == 0].sample(n=down_n)
down_pos = train[train['y_bin'] == 1]
train_down = pd.concat([down_neg,down_pos],axis=0)

wrong_pr =  train_down[y_var].mean()
print( wrong_pr )
#############################################

So if you are following along in python, wrong_pr here is exactly 0.5 by construction. Next we fit our XGBoost model, generate the predicted probabilities on the test dataset, and then draw a lift-calibration chart. (If you are not familiar with what XGBoost is, I suggest this StatQuest series of videos. You can just pretend it is a black box here that spits out predicted probabilities.)

#############################################
#Upping the number of trees for a higher resolution of 
#predicted probabilities
model = XGBClassifier(n_estimators=1000)
model.fit(train_down[x_vars], train_down[y_var])

#making predictions for the test set
#probabilities
y_pred = model.predict_proba(test[x_vars])
test['y_pred_down'] = y_pred[:,1]

#Now looking at the calibration
def cal_data(prob, true, data, bins, plot=False, figsize=(6,4), save_plot=False):
    cal_dat = data[[prob,true]].copy()
    cal_dat['Count'] = 1
    cal_dat['Bin'] = pd.qcut(cal_dat[prob], bins, range(bins) ).astype(int) + 1
    agg_bins = cal_dat.groupby('Bin', as_index=False)[['Count',prob,true]].sum()
    agg_bins['Predicted'] = agg_bins[prob]/agg_bins['Count']
    agg_bins['Actual'] = agg_bins[true]/agg_bins['Count']
    if plot:
        fig, ax = plt.subplots(figsize=figsize)
        ax.plot(agg_bins['Bin'], agg_bins['Predicted'], marker='+', label='Predicted')
        ax.plot(agg_bins['Bin'], agg_bins['Actual'], marker='o', markeredgecolor='w', label='Actual')
        ax.set_ylabel('Probability')
        ax.legend(loc='upper left')
        if save_plot:
            plt.savefig(save_plot, dpi=500, bbox_inches='tight')
        plt.show()
    return agg_bins

cal_down = cal_data(prob='y_pred_down',true=y_var,data=test,bins=60,
                    plot=True, figsize=(6,6)) 
#############################################

Oh my, you can see that our calibration is very bad. I am predicting something to happen 80% of the time in the top bin, but in reality it only happens 20% of the time. But we can fix it using an adjustment formula (I originally saw the idea from Norm Matloff and rewrote an R function in python).

#############################################
#Rebalancing function

#rewrite from
#https://github.com/matloff/regtools/blob/master/inst/UnbalancedClasses.md
#and https://www.listendata.com/2015/04/oversampling-for-rare-event.htm
def classadjust(condprobs,wrongprob,trueprob):
    #condprobs = predicted probabilities from the model fit on resampled data
    #wrongprob = proportion of the positive class in the resampled training data
    #trueprob = proportion of the positive class in the original data
    a = condprobs/(wrongprob/trueprob)
    comp_cond = 1 - condprobs
    comp_wrong = 1 - wrongprob
    comp_true = 1 - trueprob
    b = comp_cond/(comp_wrong/comp_true)
    return a/(a+b)

test['y_pred_adj'] = classadjust(test['y_pred_down'], wrong_pr, down_prop)

cal_adj = cal_data(prob='y_pred_adj',true=y_var,data=test,bins=60,
                    plot=True, figsize=(6,6))
#############################################

Alright, now we are much closer. It still is overpredicting at the highest classes (predicts 40% of the time when it should predict only 20%), but it is well calibrated for low predictions.

For kicks, I estimated the XGBoost model using the original sample data that is unbalanced and calculated the calibration plot.

#############################################
#What does it look like if I train with original?

model2 = XGBClassifier(n_estimators=1000) 
model2.fit(train[x_vars], train[y_var]) #takes a minute!

#making predictions for the test set
#probabilities
y_pred = model2.predict_proba(test[x_vars])
test['y_pred_orig'] = y_pred[:,1]

cal_adj = cal_data(prob='y_pred_orig',true=y_var,data=test,bins=60,
                    plot=True, figsize=(6,6))
#############################################

So this is slightly better than the adjusted downsampled XGBoost, but it still shows overprediction in the tail of the dataset. The overprediction here is more like 31% vs 23%, whereas before we were talking about 40% vs 20%.

Why these calibration metrics matter is that generating estimates of how much money your model is making in practice will almost always rely on correctly estimated predicted probabilities, which translate into expected true positives and false negatives. If the model is not well calibrated, your estimates of these will be gravely off.

It doesn’t always matter, though, since these transformations don’t change the rank order of the predictions. So say you are always just grabbing the top 100 predictions; this adjustment does not change which predictions are in the top 100. It will change how many of those cases you expect to translate into true positives though.
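As a quick sanity check of that rank order claim (assuming the test dataframe and the adjusted column from the snippets above), the adjustment is a strictly increasing transformation of the predicted probability, so the ranks should agree exactly:

#############################################
#ranks before and after the adjustment are identical (ties stay ties)
print( (test['y_pred_down'].rank() == test['y_pred_adj'].rank()).all() )
#############################################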

 

Using Association Rules to Conduct Conjunctive Analysis

I’ve suggested to folks a few times in the past that a popular analysis in CJ, called conjunctive analysis (Drawve et al., 2019; Miethe et al., 2008; Hart & Miethe, 2015), could be automated in a fashion using a popular machine learning technique called association rules. So I figured a blog post illustrating it would be good.

I was motivated by some recent work by Nix et al. (2019) examining officer involved injuries in NIBRS data. So I will be doing a relevant analysis (although not as detailed as Justin’s) to illustrate the technique.

This ended up being quite a bit of work. NIBRS is complicated, and I had to do some rewrites of finding frequent itemsets to not run out of memory. I’ve posted the python code on GitHub here. So this blog post will be just a bit of a nicer walkthrough. I also have a book chapter illustrating geospatial association rules in SPSS (Wheeler, 2017).

A Brief Description of Conjunctive Analysis

Conjunctive analysis is more of an exploratory technique for examining high cardinality categorical sets. Or in other words, you search through a database of cases that have many categories to find “interesting” patterns. It is probably easier to see an example than for me to describe it. Here is an example from Miethe et al. (2008):

You can see that here they are looking at characteristics of drug offenders, and then trying to identify particular sets of characteristics that influence the probability of a prison sentence. This is easy to do in one dimension, but it gets very difficult in multiple dimensions.

Association rules were created for a very different type of problem – identifying common sets of items that shoppers buy together at the same time. But you can borrow that work to aid in conducting conjunctive analysis.

Data Prep for NIBRS

So here I am using 2012 NIBRS data to conduct analysis. Like I mentioned, I was motivated by the Nix and company paper examining officer injuries. They were interested in specifically examining officer involved injuries, and whether the perception that domestic violence cases were more dangerous for officers was justified.

For brevity I only ended up examining five different variable sets in NIBRS (Justin has quite a few more in his paper):

  • assault (or injury) type V4023
  • victim/off relationship V4032
  • ucr type V2006
  • drug use V2009 (also includes computer use!)
  • weapon V2017

All of these variables have three different item sets in the NIBRS codes, and many categories. You will have to dig into the python code, 00_AssocRules.py on the GitHub page, to see how I recoded these variables.

Also maybe of interest, I have some functions to do one-hot encoding of wide data. A benefit of NIBRS is that you can have multiple crimes in one incident, e.g. one incident in which both an assault and a burglary occur. I do the analysis in a way that common co-crimes would pop out.

Don’t take this as very formal though. Justin’s paper, which used 2016 NIBRS data, only had 1 million observations, whereas here I have over 5 million (so somewhere along the way Justin and I are using different units of analysis). Also Justin incorporates dozens of other variables into the analysis that I don’t here.

Even with just these variables (and the reduced sets of codes I created), there still end up being 34 different categories in the data.

Analysis of Frequent Item Sets

The first part of conjunctive analysis (or association rules) is to identify common item sets. The work of Hart/Miethe is always pretty vague about how you do this. Association rules take the simple approach of finding any itemsets (sets of categories) that meet an arbitrary frequency threshold.

So the way you represent the data is exactly how the prior Miethe et al. (2008) table showed: you create a series of dummy 0/1 variables. Then in association rules you look for sets in which, across different cases, all of the dummy variables take the value of 1.

The code 01_AssocRules.py on GitHub shows this, going from the already created dummy variable data. I ended up writing my own function to do this, as I kept getting out of memory errors using the mlxtend library. (I don’t know if this is because my data is large N but with a smaller number of columns.) You can see my freq_sets function to do this.

Typically in association rules you identify item sets that meet a particular support threshold, where support just means the proportion of cases in which those items co-occur (e.g. 20% of assault cases also have a weapon of fists listed). Instead I wrote the code to use a minimum N, which I chose here to be 1000 cases. (So out of 5 million cases, this is a support of 1/5000.)
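To give a flavor of the counting, below is a compact sketch of a level-wise (apriori style) search over the dummy columns with a minimum N. This is only a sketch of the idea, not the actual freq_sets function in the repo:

from itertools import combinations

def freq_sets_sketch(dum_df, min_n=1000, max_k=5):
    #dum_df is a dataframe of 0/1 dummy variables, one row per incident
    #keep any set of columns that co-occur in at least min_n cases
    current = [(c,) for c in dum_df.columns if dum_df[c].sum() >= min_n]
    counts = {s: int(dum_df[list(s)].all(axis=1).sum()) for s in current}
    for k in range(2, max_k + 1):
        #candidate k-item sets are unions of surviving smaller sets
        cand = set()
        for a, b in combinations(current, 2):
            u = tuple(sorted(set(a) | set(b)))
            if len(u) == k:
                cand.add(u)
        current = []
        for s in cand:
            n = int(dum_df[list(s)].all(axis=1).sum())
            if n >= min_n:
                counts[s] = n
                current.append(s)
        if not current:
            break
    return counts

Because any subset of a frequent itemset is itself frequent, growing the sets level by level like this does not miss anything above the minimum N.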

I end up finding a total of 411 frequent item sets in the data that have at least 1000 cases (out of the over 5 million). Here are a few examples, with the frequencies to the left. So there are over 2000 cases in the 2012 NIBRS data that had a known relationship between victim/offender, resulted in assault, where the weapon used was fists (or kicking), and involved computer use in some way. I only end up finding two itemsets that have 5 categories, and that is it; there are no larger sets of categories that have at least 1000 cases in this dataset.

3509    {'rel_Known', 'ucr_Assault', 'weap_Fists', 'ucr_Drug'}
2660    {'rel_Known', 'ucr_Assault', 'weap_Firearm', 'ucr_WeaponViol'}
2321    {'rel_Known', 'ucr_Assault', 'weap_Fists', 'drug_ComputerUse'}
1132    {'rel_Known', 'ucr_Assault', 'weap_Fists', 'weap_Knife'}
1127    {'ucr_Assault', 'weap_Firearm', 'weap_Fists', 'ucr_WeaponViol'}
1332    {'rel_Known', 'ass_Argument', 'rel_Family', 'ucr_Assault', 'weap_Fists'}
1416    {'rel_Known', 'rel_Family', 'ucr_Assault', 'weap_Fists', 'ucr_Vandalism'}

Like I said, I was interested in using NIBRS because of the Nix example. One way we can then examine what variables are potentially related to officer involved injuries during the commission of a crime is to just pull out any itemsets which include the variable of interest, here ass_LEO_Assault.

4039    {'ass_LEO_Assault'}
1232    {'rel_Known', 'ass_LEO_Assault'}
4029    {'ucr_Assault', 'ass_LEO_Assault'}
1856    {'ass_LEO_Assault', 'weap_Fists'}
1231    {'rel_Known', 'ucr_Assault', 'ass_LEO_Assault'}
1856    {'ucr_Assault', 'ass_LEO_Assault', 'weap_Fists'}

So we see there are a total of just over 4000 officer assaults in the dataset. Unsurprisingly almost all of these also had an UCR offense of assault listed (4029 out of 4039).

Analysis of Association Rules

Sometimes just identifying the common item sets is what is of main interest in conjunctive analysis (see Hart & Miethe, 2015 for an example of examining the geographic characteristics of crime events).

But the apriori algorithm is one way to find particular rules of the form "if A occurs then B occurs quite often", where you can swap more complicated itemsets into the antecedent (A) and consequent (B), and use different ways of quantifying "quite often".

I prefer conditional probability notation to the more typical association rule notation, but for typical rules we have (here I use A for the antecedent and B for the consequent):

  • confidence: P(A & B) / P(A). So if the itemset of just A occurs 20% of the time, and the itemset of A and B together occurs 10% of the time, the confidence would be 50%. (Or more simply, this is the probability of B conditional on A, P(B | A).)
  • lift: confidence(A,B) / P(B). This is the ratio of that confidence to the baseline rate at which B occurs. So if B occurs 25% of the time as a baseline, and the confidence of the rule is 50%, you would have a lift of 2.

There are other rules as well that folks use, but those are the most common two I am interested in.
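Given a dictionary of itemset counts like the sketch above returns, confidence and lift for a candidate rule are just a couple of divisions. A small sketch (counts is the dictionary from the earlier sketch and n_total the total number of cases, e.g. len(dum_df)):

def rule_metrics(antecedent, consequent, counts, n_total):
    #confidence = P(B | A), lift = P(B | A) / P(B), from raw itemset counts
    A, B = frozenset(antecedent), frozenset(consequent)
    p_ab = counts[tuple(sorted(A | B))] / n_total
    p_a = counts[tuple(sorted(A))] / n_total
    p_b = counts[tuple(sorted(B))] / n_total
    confidence = p_ab / p_a
    lift = confidence / p_b
    return confidence, lift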

So for example in this data, if I draw out rules that have a lift of over 2, I find rules like {'ucr_Vandalism', 'rel_Family'} -> {'ass_Argument'}, which produces a lift of over 6. (I can use the mlxtend implementation here in this code; it was only the frequent itemsets code that was giving me problems.) It ends up being that arguments are listed in the injury codes around 1.6% of the time, but when you have a ucr crime of vandalism and the relationship between victim/offender is family, the injury type of argument happens around 10.5% of the time (so 10.5/1.6 ~= 6).

The original use case for this is recommender systems/market basket analysis (so say if you see someone buy A, give them a coupon for B). So this ends up being not so interesting in this NIBRS example, where criminologists would be interested in more clear cause-effect type relationships. But I describe in the next section some further machine learning models that may be more relevant, and how I might in the future amend the apriori algorithm for examining specific outcomes.

Further Notes

If you are interested in a specific outcome from the get go (so not so much the totally exploratory analysis here), there are a few different options that may make more sense than association rules.

One is the RuleFit algorithm, which basically just uses a regularized regression to find simple models and low order interactions. An example of this idea using police stop data is in Goel et al. (2016). These are very similar in the end to simple decision trees; you can also have continuous covariates in the analysis, and it splits them into binary above/below rules. So you could, say, do RTM distance analysis and still have it output a rule like "if < 1000 ft, predict high risk". But they are fit in a way that tends to behave better out of sample than simple decision trees.

Another is fitting a more complicated model, say random forests, and then producing reduced form summaries to describe those models. I have some examples of using Shapley values for spatial crime prediction in Wheeler & Steenbeek (2020), but for more if-then type sets of rules you could look at Scoped Rules.

I may need to dig into the association rules code some more though, and try to update the code to take the sample sizes and statistical significance into account for a particular outcome variable. So if you find higher lift in a four-item set predicting a particular outcome, you then search the tree for more sets with smaller support in the distribution. (I should probably also work on some cool network viz to look at all the different rules.)


 

Balancing False Positives

One area of prediction in criminal justice I think has a lot of promise is using predictive algorithms in place of bail decisions. That is, using a predictive instrument to determine whether someone is detained pre-trial based on risk, or released on recognizance if they are low risk. Risk can be defined either as future dangerousness or as flight risk. This cuts out the middle man of bail, which doesn’t have much evidence of effectiveness, and which has the negative externality of placing economic burdens on folks we really don’t want to pile that onto. It is also the case that algorithms can likely do quite a bit better than judges in figuring out future risk. So this is an area where I think they can really do good compared to the current status quo in the CJ system.

A reasonable critique of such systems though is that they can have disparate racial impact. For example, ProPublica had an article on how the Compas risk assessment instrument resulted in more false positives for black than for white individuals. Chris Stucchio has a nice breakdown of why this occurs, which is not due to Compas being an intrinsically racist algorithm, but due to the nature of the baseline risks for the two groups.

Consider a very simple example to illustrate. Imagine that, based on our cost-benefit analysis, we determine the probability threshold to flag an individual as high risk is 60%. Now say once we apply our predictions, for those above the threshold, whites are all predicted to be 90%, and blacks are all 70%. If our model is well calibrated (which is typically the case), the false positive rate for whites will be 10%, and will be 30% for blacks.
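A two line illustration of that arithmetic in python, with made up predicted probabilities: if the model is calibrated, the expected false positive rate among the flagged cases is just one minus their average predicted probability.

import numpy as np

white_flag = np.repeat(0.9, 100)  #made up predictions above the threshold
black_flag = np.repeat(0.7, 100)
print( 1 - white_flag.mean(), 1 - black_flag.mean() )  #0.1 vs 0.3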

It is actually a pretty trivial problem though to balance false positive rates between different groups, if that is what you want to do. So I figured I would illustrate here using the same ProPublica data. There are trade-offs with this though: balancing false positives means you lose out on other metrics of fairness. In particular, it means you don’t have equality of treatment – different racial groups will have different thresholds. The full data and code I use to illustrate this can be downloaded here.

An Example in Python

To illustrate how we would balance the false positive rates between groups, I use the same ProPublica risk assessment data. So this isn’t per se for bail decisions, but it works fine as an illustration. First in python I load my libraries, and then read in the data – it is a little over 11,000 cases.

import pandas as pd
import os
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt

my_dir = r'C:\Users\andre\Dropbox\Documents\BLOG\BalanceFalsePos'
os.chdir(my_dir)

#For notes on data source, check out 
#https://github.com/apwheele/ResearchDesign/tree/master/Week11_MachineLearning
recid = pd.read_csv('PreppedCompas.csv')
print( recid.head() )

Next I prepare the dataset for modelling. I am not using all of the variables in the dataset. What I predict here is recidivism post 30 days (there is a bunch of recidivism right away in the dataset, so I am not 100% sure those cases are prior to screening). I use the three different aggregate Compas scores, juvenile felony count, whether they were male, how old they were, and whether the current charge that precipitated screening is a felony or misdemeanor. I include the race variable in the dataset, but I won’t be using it in the predictive model. (That point deserves another blog post; contrary to what you might expect, leaving race flags in will often result in better outcomes for that protected class.)

#Preparing the variables I want
recid_prep = recid[['Recid30','CompScore.1','CompScore.2','CompScore.3',
                    'juv_fel_count','YearsScreening']].copy()
recid_prep['Male'] = 1*(recid['sex'] == "Male")
recid_prep['Fel'] = 1*(recid['c_charge_degree'] == "F")
recid_prep['Mis'] = 1*(recid['c_charge_degree'] == "M")
recid_prep['race'] = recid['race']
print( recid['race'].value_counts() ) #pretty good sample size for both whites/blacks

Next I make my testing and training sets of data. I can perfectly balance false positives retrospectively in the training data, but having a test set is a better representation of reality, where you need to make decisions on the historical data and apply them going forward.

#Now generating train and test set
recid_prep['Train'] = np.random.binomial(1,0.75,len(recid_prep))
recid_train = recid_prep[recid_prep['Train'] == 1].copy()
recid_test = recid_prep[recid_prep['Train'] == 0].copy()

Now, the procedure I suggest to balance false positives doesn’t depend on how you generate the predictions; we just need a predicted probability. Here I use random forests, but you could use whatever machine learning or logistic regression model you want. The second part just generates the predicted probabilities for the training dataset.

#Now estimating the model
ind_vars = ['CompScore.1','CompScore.2','CompScore.3',
            'juv_fel_count','YearsScreening','Male','Fel','Mis'] #no race in model
dep_var = 'Recid30'
rf_mod = RandomForestClassifier(n_estimators=500, random_state=10)
rf_mod.fit(X = recid_train[ind_vars], y = recid_train[dep_var])

#Now getting the predicted probabilities in the training set
pred_prob = rf_mod.predict_proba(recid_train[ind_vars] )
recid_train['prob'] = pred_prob[:,1]
recid_train['prob_min'] = pred_prob[:,0]

Now to balance false positives, I will show a graph. Basically this just sorts the predicted probabilities in descending order within each racial group. Then you can calculate a cumulative false positive rate at different thresholds for each group.

#Making a cusum plot within each racial group for the false positives
recid_train.sort_values(by=['race','prob'], ascending=False, inplace=True)
recid_train['const'] = 1
recid_train['cum_fp'] = recid_train.groupby(['race'])['prob_min'].cumsum()
recid_train['cum_n'] = recid_train.groupby(['race'])['const'].cumsum()
recid_train['cum_fpm'] = recid_train['cum_fp'] / recid_train['cum_n']
white_rt = recid_train[recid_train['race'] == 'Caucasian']
black_rt = recid_train[recid_train['race'] == 'African-American' ] 

And now the fun part (at least in the output, not really in writing the matplotlib code).

#now make the chart for white and black
fig, ax = plt.subplots()
ax.plot(black_rt['prob'], black_rt['cum_fpm'], drawstyle='steps', color='b', label='Black')
ax.plot(white_rt['prob'], white_rt['cum_fpm'], drawstyle='steps', color='r', label='White')
ax.set_xlim(1, 0)  # decreasing probs
plt.xticks(np.arange(1.0,-0.1,-0.1))
ax.set_xlabel('Predicted Probability')
ax.set_ylabel('Mean False Positive Rate')
ax.grid(True,linestyle='--')
ax.legend(facecolor='white', framealpha=1)
plt.savefig('FP_Rate.png', dpi=2000, bbox_inches='tight')
plt.show()

So what this chart shows is that if we set our threshold at a particular predicted probability (X axis), based on the data we would expect a particular false positive rate (Y axis). Hence if we want to balance false positives, we just figure out the race specific thresholds for each group at a particular Y axis value. Here we can see the white line is actually higher than the black line, so this is the reverse of the ProPublica findings: we would expect whites to have a higher false positive rate than blacks given a consistent predicted probability cutoff for flagging high risk. So say we set the threshold at 10% to flag as high risk; we would guess the false positive rate among blacks in this sample should be around 40%, but it will be closer to 45% in the white sample.

Technically the lines can cross at one or multiple places, and those crossings are where you get both equality of treatment and equality of outcome. It doesn’t make sense to rely on that from a safety standpoint though – those crossings can happen at a predicted probability of 99% (so too many false negatives) or 0.1% (too many false positives). So say we wanted to equalize false positive rates at 30% for each group. Here this results in a high risk threshold of 0.256 for whites, and a threshold of 0.22 for blacks.

#Figuring out where the threshold is to limit the mean FP rate to 0.3
#For each racial group
white_thresh = white_rt[white_rt['cum_fpm'] > 0.3]['prob'].max()
black_thresh = black_rt[black_rt['cum_fpm'] > 0.3]['prob'].max()
print( white_thresh, black_thresh )

Now for the real test, let’s see if my advice actually worked in a new sample of data to balance the false positive rate.

#Now applying out of sample, lets see if this works
pred_prob = rf_mod.predict_proba(recid_test[ind_vars] )
recid_test['prob'] = pred_prob[:,1]
recid_test['prob_min'] = pred_prob[:,0]

white_test = recid_test[recid_test['race'] == 'Caucasian'].copy()
black_test = recid_test[recid_test['race'] == 'African-American'].copy()

white_test['Flag'] = 1*(white_test['prob'] > white_thresh)
black_test['Flag'] = 1*(black_test['prob'] > black_thresh)

white_fp= 1 - white_test[white_test['Flag'] == 1][dep_var].mean()
black_fp = 1 - black_test[black_test['Flag'] == 1][dep_var].mean()
print( white_fp, black_fp )

And we get a false positive rate of 54% for whites (294/547 false positives), and 42% for blacks (411/986) – yikes, since I wanted a 30% FPR. As is typical when applying your model to out of sample data, your predictions are too optimistic. I think a better way to get error bars on such thresholds is to do some k-fold metrics and take the worst case scenario, but I need to investigate that some more. The sample sizes here are decent, but there will ultimately be some noise when deploying this in practice. So basically, if in practice the false positive rates are within a few percentage points of the target, that is about as good as you can get I imagine. (And for smaller sample sizes they will be more volatile.)
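To give an idea of what that k-fold check might look like, here is a rough sketch. It assumes the recid_prep dataframe and variable lists from above, and the fp_threshold helper is my own stand-in that mirrors the cumulative false positive logic earlier in the post; taking the highest per-fold threshold for each group would be the conservative, worst case pick.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

def fp_threshold(probs, target_fpr=0.3):
    #cutoff where the expected false positive rate among flagged cases
    #first exceeds the target, same logic as the cum_fpm calculation above
    srt = np.sort(probs)[::-1]
    cum_fpm = np.cumsum(1 - srt)/np.arange(1, len(srt) + 1)
    over = srt[cum_fpm > target_fpr]
    return over.max() if len(over) > 0 else srt.min()

kf = KFold(n_splits=5, shuffle=True, random_state=10)
fold_thresh = {'Caucasian': [], 'African-American': []}

for tr_idx, te_idx in kf.split(recid_prep):
    tr = recid_prep.iloc[tr_idx]
    te = recid_prep.iloc[te_idx]
    mod = RandomForestClassifier(n_estimators=500, random_state=10)
    mod.fit(tr[ind_vars], tr[dep_var])
    pp = mod.predict_proba(te[ind_vars])[:,1]
    for race in fold_thresh:
        fold_thresh[race].append( fp_threshold(pp[(te['race'] == race).values]) )

#worst case (highest) threshold per group
print( {r: max(v) for r, v in fold_thresh.items()} )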

A shout out for ScraperWiki

I’ve used the online tool ScraperWiki a few times to scrape some online data into a simple spreadsheet. It provides a set of tools to code in Python (and a few other languages) online in the browser and dump the resulting set of information into an online table. It also allows you to set the data collection on a timer and updates the table with new information. It is just a nice set of tools that makes getting and sharing the data easier.

For a few sample scripts I’ve used it for in the past:

  • Grabbing the number of chat posts from Stack Overflow chat rooms.
  • Grabbing the user location from profiles on the GIS.SE site. (This is really redundant with the power to query the SO data explorer – it can be more up to date though.)
  • Scraping the NYPD reported precinct statistics that were uploaded in PDF form. (Yes you can turn PDFs into xml which you can subsequently parse).

I previously used what is now called ScraperWiki Classic. But since the service is migrating to a new platform my prior scripts will not continue downloading data.

It appears my NYPD data scraper does not work any more anyway (they do have an online map now – but unfortunately are not providing the data in a convenient format for bulk download to the NYC Open Data site last I knew). But it has some historical data from the weekly precinct level crime stats between the end of 2010 and through most of 2013 if you are interested.

I’ve migrated the chat posts scraper to the new site, and so it will continue to update with new data for the activity in the SO R Room and the main chat room on the CrossValidated Stats Site. I’ll have to leave analysing that data for a future post, but if you are interested in amending the scraper to download data for a different room it is pretty easy. Just fork it and add/change the rooms of interest to the url_base array. You can download a csv file of the data directly from this link. The Mono field is a count of the unique "Monologue" tags in the html, and the Baseroom shows what chat room the data is from. Note that the Mono value shows 0 even if the room did not exist at that date.

I’m not sure how to make the scraper viewable to the public, but you can still view the code on the classic site. Note the comment about the CPU time isn’t applicable to the main site; I could download the transcripts all the way back to 2010 and I did not receive any errors from ScraperWiki.