Conformal Sets Part 2, Estimating Precision

After publishing my post yesterday, in which I said you could not estimate the false positive rate using conformal sets, I realized there is a way: if you know some parts of the contingency table, you can estimate the other parts. In short, if you know the proportion of the outcome in your sample, which you can estimate from the same calibration sample (and that estimate should be reasonable given the same exchangeability assumption needed for conformal sets to work in the first place), you can estimate the false positive rate (or the precision of your predictions at a particular threshold).

Here we know, given a particular threshold, the percentage estimates for the cells below:

           True
          0     1 
       -----------
Pred 0 | tn  | fn |
       ------------
     1 | fp  | tp |
       ------------
         X0    X1

I added two details to the table, X0 and X1, which are the column totals. So imagine we have a result where you have 90% coverage for the positive class and 60% coverage for the negative class. Also assume that the proportion of 1's is 20%. So we have this percentage table:

           True
          0     1 
       -----------
Pred 0 | 60%  | 10% |
       ------------
     1 | 40%  | 90% |
       ------------

Now translate that to counts, where we have 80 0's and 20 1's (the 20% proportion specified above):

           True
          0     1 
       -----------
Pred 0 | 48  |  2 |
       ------------
     1 | 32  | 18 |
       ------------
         80    20

So our false positive estimate is 32/(32 + 18) = 0.64. Treating fp, tp, etc. as our original percentage metrics, we can then write our table as:

             True
           0        1
        -----------------
Pred 0 | tn*X0  | fn*X1 |
        -----------------
     1 | fp*X0  | tp*X1 |
        -----------------
          X0       X1

And so our false positive equation is then:

    fp*X0
--------------
(fp*X0 + tp*X1)

We can make this dimensionless by setting X0 = 1 and then writing X1 = m*X0 (so X1 = m). Then we can update the above equation to simply be:

    fp*X0                    fp
--------------       =>  ----------
  fp*X0 + tp*X0*m         fp + tp*m

The factor m is prop/(1 - prop), where prop is the proportion of 1's, here 0.2, so m is 0.2/0.8 = 0.25. Plugging in, 0.4/(0.4 + 0.9*0.25) = 0.64. And that is our false positive estimate for that threshold across the sample, and our precision estimate is the complement, so 36%.
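
To double check the arithmetic, here is a quick sketch in python showing the count version and the dimensionless version give the same answer (the numbers are just the ones from the example above):

# counts version from the example table
fp_count, tp_count = 32, 18
print(fp_count / (fp_count + tp_count))  # 0.64

# dimensionless version using the percentage metrics
fp, tp, prop = 0.4, 0.9, 0.2
m = prop / (1 - prop)  # 0.25
print(fp / (fp + tp*m))  # 0.64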

I have updated my code on github, but here is an example class to help calculate all of these metrics. It is short enough to put right on the blog:

import numpy as np
from scipy.stats import ecdf

class ConfSet:
    def __init__(self,y_true,y_score):
        # predicted probabilities split by true class
        self.p1 = y_score[y_true == 1]
        self.p0 = y_score[y_true == 0]
        # proportion of 1's and the factor m = prop/(1 - prop)
        self.prop = y_true.mean()
        self.m = self.prop/(1 - self.prop)
        # empirical CDFs of the scores for each class
        self.ecdf1 = ecdf(self.p1)
        self.ecdf0 = ecdf(self.p0)
    def Cover1(self,k):
        # score cutoff so that k% of the 1's are above it
        res = np.percentile(self.p1,100-k)
        return res
    def Cover0(self,k):
        # score cutoff so that k% of the 0's are below it
        res = np.percentile(self.p0,k)
        return res
    def PCover(self,p):
        # coverage for the 0's and 1's at score threshold p
        cov1 = self.ecdf1.sf.evaluate(p)
        cov0 = self.ecdf0.cdf.evaluate(p)
        # false positive estimate
        fp = 1 - cov0
        fprate = fp/(fp + cov1*self.m)
        return cov0, cov1, fprate

So based on your calibration sample, you pass in the true outcomes and the predicted probabilities. After that you can either get the score cutoff for a target recall (or zero class recall) using the Cover methods, or, given a particular threshold, calculate the different metrics with PCover. Here is an example using this method (with the same data from yesterday) to replicate the metrics in that post (for the 0 class coverage, I am always working with the predicted probability for the positive class, not its complement):
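
I am not pasting that data here, but in outline the usage looks like this, a minimal sketch with simulated scores standing in for a calibration sample (the beta distributions and seed below are just made up for illustration):

rng = np.random.default_rng(10)
# simulated calibration sample, 20% 1's, scores for the 1's shifted higher
y_true = rng.binomial(1, 0.2, 1000)
y_score = np.where(y_true == 1,
                   rng.beta(4, 2, 1000),
                   rng.beta(2, 4, 1000))

cs = ConfSet(y_true, y_score)
print(cs.Cover1(90))   # score cutoff covering 90% of the 1's
print(cs.Cover0(60))   # score cutoff covering 60% of the 0's
print(cs.PCover(0.5))  # (cov0, cov1, false positive rate) at threshold 0.5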

And this provides a much better out of sample estimator of false positive rates than using the PR curve directly.

The method makes it easy to do a grid search over different thresholds and calculate the metrics at each:
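
Something along these lines works, a sketch continuing from the simulated cs object above and collecting the results in a pandas dataframe:

import pandas as pd

# grid of candidate thresholds
grid = np.arange(0.1, 0.91, 0.05)
cov0, cov1, fprate = cs.PCover(grid)

met = pd.DataFrame({'Threshold': grid, 'Cover0': cov0,
                    'Cover1': cov1, 'FPRate': fprate})
met['Precision'] = 1 - met['FPRate']
print(met)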

And here is an ugly graph to show that the out of sample metrics match up very closely to the calibration estimates:
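
For reference, a rough sketch of how that comparison could be put together, assuming you have held out y_true_test and y_score_test arrays (names hypothetical) and reusing the grid from above:

import matplotlib.pyplot as plt

# observed out of sample false positive rate at each threshold
fpr_oos = []
for t in grid:
    pred1 = y_score_test >= t
    fpr_oos.append(1 - y_true_test[pred1].mean())

fig, ax = plt.subplots()
ax.plot(grid, fprate, label='Calibration Estimate')
ax.plot(grid, fpr_oos, label='Out of Sample')
ax.set_xlabel('Threshold')
ax.set_ylabel('False Positive Rate')
ax.legend()
plt.show()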

I debated adding a method to solve for the threshold given a target false positive rate, but I think just doing a grid search may be the better approach. I am partially concerned that in small samples the false positive estimate will not be monotonic (a similar problem happens in AUC calculations with censored data). But here, in this sample at this grid level, it is monotonic. And in most realistic cases I have personally worked with, worrying about the third digit in predicted probabilities is just noise at that point.
