Fitting beta binomial in python, Poisson scan stat in R

Sharing two pieces of code I worked on recently for various projects. The first is fitting a beta-binomial distribution in scipy. I had a need for this the other day, looking at the count of near-duplicates in surveys. In that code I just used the method of moments estimator (which can often misbehave).

The top google results for this are all very bad (either code that does not work, or alternatives that are not good advice). One of the articles was even auto-generated content (à la ChatGPT type stuff) that had a few off-the-mark points (although it superficially resembled what the python solution should look like, so the unwary would be led down a wrong path).

So here I show how to compute the maximum likelihood estimates for the beta-binomial distribution. Because scipy already has a function for the PMF of the beta-binomial distribution, it is pretty simple. For all of the discrete distributions, it should look like -dist.logpmf(..data..,...params...).sum(). In complicated stats speak this is “the negative of the sum of the log likelihood”. It is easier for me anyway to think in terms of the PDF/PMF (the probability of observing your data given fixed parameters), and you find the parameters that maximize that probability over your entire sample. To make the math easier we take the logs of the probabilities (so we can work with sums instead of products), the log PMF here, and we take the negative so that minimizing the function is the same thing as maximizing the likelihood (since optimizers typically minimize).
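As a quick illustration of that template with a simpler one-parameter distribution, here is a toy Poisson fit (my own throwaway example, the Poisson MLE here is of course just the sample mean):

from scipy.optimize import minimize
from scipy.stats import poisson

# toy example of the -logpmf().sum() template
sim = poisson.rvs(3.1, size=1000, random_state=10)

def pois_ll(parms, k):
    mu = parms[0]
    return -poisson.logpmf(k, mu).sum()

res = minimize(pois_ll, [1], args=(sim,), method='Nelder-Mead')
print(res.x)  # should be close to 3.1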

Then for the beta-binomial you just pass the appropriate arguments to minimize and you are good to go.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom
np.random.seed(10)

# simulating some random data
a = 0.8
b = 1.2

sim_size = 1000

n = 90
r = betabinom.rvs(n, a, b, size=sim_size)

# minimize negative log likelihood
def bbll(parms,k,n):
    alpha, beta = parms
    ll = betabinom.logpmf(k,n,alpha,beta)
    return -ll.sum()

result = minimize(bbll,[1,1],args=(r,90),method='Nelder-Mead')
print(result.x) # [alpha, beta]

And this returns for me:

>>> print(result.x)
[0.77065051 1.16323611]

Using simple simulations you can often get a feel for different estimators. Here n and sim_size make a decent difference for estimating beta, and I think beta tends to be biased downward in the smaller sample scenarios. (People don’t realize that for these non-normal distributions it is not uncommon to need 1000+ observations to get decent unbiased estimates, depending on the distribution.)
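If you want to check that yourself, here is a rough sketch of the simulation approach (my own quick version, reusing bbll and the imports from above): re-simulate and refit many times, then compare the average estimates to the true values.

# rough sketch to eyeball bias at a given sample size
def avg_est(a, b, n, samp, reps=200):
    est = []
    for i in range(reps):
        ri = betabinom.rvs(n, a, b, size=samp)
        fi = minimize(bbll, [1, 1], args=(ri, n), method='Nelder-Mead')
        est.append(fi.x)
    return np.array(est).mean(axis=0)

print(avg_est(0.8, 1.2, 90, 100))   # smaller samples
print(avg_est(0.8, 1.2, 90, 1000))  # larger samples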

Note the nature of the data here: the first argument is the hit counts, something like [5,8,9], and the second is the denominator. The denominator can be constant for every observation (if it is say 10 for all the observations, you can just pass a constant 10), but it can also be variable in this set up, so you could have a set of different denominators like [6,8,10].

a = 1.5
b = 4.1

n = np.random.choice(range(30,90),size=sim_size)
r = betabinom.rvs(n, a, b, size=sim_size)

result = minimize(bbll,[1,1],args=(r,n),method='Nelder-Mead')
print(result.x)

Which returns:

>>> print(result.x)
[1.50563582 3.99837155]

I note here that some examples of the beta-binomial use weighted data (the wikipedia page does this). These functions expect unweighted data. For functions that need to be repeatedly called (like the likelihood function here), I don’t like making them general with ifs and other junk; I would rewrite the bbll function for the different form of data and call that different function.
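For example, if your data came as unique (k, n) pairs along with a count of how many times each pair occurred, a weighted version might look like this sketch (the w argument is my own addition here):

# sketch of a weighted version, w is how many times
# each unique (k, n) pair occurs in the data
def bbll_weighted(parms, k, n, w):
    alpha, beta = parms
    ll = w * betabinom.logpmf(k, n, alpha, beta)
    return -ll.sum()

# then minimize(bbll_weighted, [1, 1], args=(k, n, w), method='Nelder-Mead')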

Also, as always, you need to check the results to make sure the fitted parameters make sense and reasonably fit your data (plot the predicted PMF vs the observed histogram). The function here can converge, but it could converge to nonsense (you probably don’t need to worry about constraints on the parameters, but better starting values are probably a good idea).
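For the fixed n=90 example earlier, a quick eyeball check might look something like this (a sketch using matplotlib, comparing the observed proportions to the fitted PMF):

import matplotlib.pyplot as plt

# refit the fixed n=90 example and eyeball the fit
rf = betabinom.rvs(90, 0.8, 1.2, size=1000, random_state=10)
ft = minimize(bbll, [1, 1], args=(rf, 90), method='Nelder-Mead')

kv = np.arange(91)
obs = np.bincount(rf, minlength=91) / rf.size
plt.bar(kv, obs, alpha=0.5, label='observed')
plt.plot(kv, betabinom.pmf(kv, 90, *ft.x), color='k', label='fitted PMF')
plt.legend()
plt.show()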

For future notes for myself, Guimaraes (2005) has examples of using fixed effects negative binomial models and translating to the beta-binomial (for fixed n). Also Young-Xu & Chan (2008) is a very nice reference (it gives the Hessian, useful if I wanted to estimate standard errors), and also discusses how to determine whether to use this model or a binomial with no extra dispersion.


The second thing I will post about is a scan statistic. The background is, imagine someone comes to you and says “Hey, there were 10 robberies in the past week, is that normal?”. In the scenario where you have fixed time intervals, e.g. Monday through Sunday, and your data is approximately Poisson distributed, you can calculate this directly from the CDF. So say your mean per week over 2 years is 3.1; the probability of observing a count of 10 or more in one pre-specified week is:

> # R code
> 1 - ppois(10-1,3.1)
[1] 0.001400924

So a little more than 1 in 1000. But say you ask the question “What is the probability of observing at least one week of 10 or more, when I have been monitoring the series for 2 years?”, with 52 weeks per year. You would then adjust the probability for monitoring the trends over multiple weeks (treating the weekly counts as independent):

> p <- ppois(10-1,3.1)
> 1 - p^(52*2)
[1] 0.1356679

So the probability of observing 10 or more in at least one week over a 2 year period is closer to 14%. This multiple comparison issue is more extreme when you consider a sliding window – so you count events that occur in any span of a week, not all necessarily in your pre-specified Monday to Sunday time period. So maybe you observed 10 in a week span that goes from Wednesday to Tuesday. What is the probability of observing 10 in such an ad-hoc week time period over the 2 year monitoring period? This often comes up in news articles, see this post by David Spiegelhalter on pedestrian deaths for an example.

I have added the Naus (1982) approximate statistic to calculate this in my ptools R package – scanw. If you install the latest version of ptools from github you can run it for yourself:

> # R code
> library(devtools)
> install_github("apwheele/ptools") # get most recent
> library(ptools)
>
> # example given above
> scanw(52*2,1,3.1,10)

Which prints out [1] 0.5221948. So adding in the sliding window considerably ups the probability of observing a large clump of events.

I don’t think this is so useful from a crime analyst perspective, it is more for a journalistic perspective of ‘oh we saw this number recently, is that anomalous?’. If you are actively monitoring crime stats, I would suggest you use the stats I describe in Wheeler (2016) to identify current outliers given fixed time intervals from the start. (And this approximation is for Poisson data; overdispersed data will have a higher probability.)

And for reference as well, Prieto-Curiel et al. (2023) have an approach that examines the cumulative sum. I’ve debated doing that in a control chart style framework as well, but instead of just cumsum(counts), doing cumsum(counts - expected). I don’t know how people effectively reset the cumulative charts though, or how they effectively deal with seasonality.
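To show what I mean in code, a quick python sketch (the weekly counts here are just made up):

import numpy as np

# cusum style chart, cumulative sum of observed minus expected
counts = np.array([2, 4, 3, 7, 1, 5, 9, 3])  # made up weekly counts
expected = 3.1                               # e.g. the long run weekly mean
chart = np.cumsum(counts - expected)
print(chart)  # sustained upward drift = counts above expectations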

I think my approach in Wheeler (2016) is better for identifying anomalous trends right now; the Prieto-Curiel approach is still examining historical data and looking for breaks.


ptools R package update

So as an update to my R package ptools, I have bumped the major version to 2.0, which is now up on CRAN.

There is no new functionality, but I wanted to bump versions because I swapped out rgdal/rgeos for sf (rgdal and rgeos are being deprecated). All the functions currently still take sp objects as inputs/outputs. If I ever get around to it, I will convert the functions to take either. They are somewhat inefficient with the conversions, but if you are doing something where that matters, you should likely switch your data-engineering to another system entirely (such as via SQL in postgis directly). Generating hexagons should actually be faster now, as the sf version I swapped in is vectorized (whereas the way I was using sp prior was a loop).

I debate every now and then just letting this go. I can see on cranlogs I have a grand total of just over 1k downloads (as of 2/7/2023), and somewhere around 200 of those were in the last month.

Time is finite, so I have debated dropping this and just porting most of the functions over into python. Those cumulative downloads are also partially bots (I may have racked up 100 of those downloads myself in my CI/CD actions). Let me know if you actually use this, as that gives me feedback on whether to bother continuing to develop/update it.