Python f-string number formatting and SPSS breaking long labels

Another quick blog post, as moving is no longer 100% crazy all the time, but I need a vacation after all that work. So two things in this blog post: formatting numbers in python f-strings, and breaking long labels in SPSS metadata.

Python f-string numeric formatting

This is super simple, but I can never remember it (so making a quick blog post for my own reference). As of python 3.6, you can use f-strings to do simple text substitution. So if you do:

x = 2/3
sub_str = f'This proportion is {x}'
print(sub_str)

This prints out This proportion is 0.6666666666666666. So placing objects inside of {} expands them within the f-string. While for more serious string substitution (like creating parameterized SQL queries) I like to use string templates, these f-strings are very nice for printing short messages to the console or making annotations in graphs.
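For reference, the string templates I am referring to are in the standard library string module. Here is a minimal sketch of that substitution pattern (the query and field names are made up purely for illustration):

from string import Template

# Hypothetical query template just for illustration, $ placeholders get filled by name
sql_temp = Template("SELECT * FROM crimes WHERE year = $year AND state = '$state'")
query = sql_temp.substitute(year=2019, state='TX')
print(query)  # SELECT * FROM crimes WHERE year = 2019 AND state = 'TX'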

Part of this note is that I never remember how to format these strings. If you are working with integers it is not a big deal, but as you can see above I often do not want to print out all those decimals inside my particular message. A simple way to format the string is:

f'This proportion is {x:.2f}'

And this prints out to two decimal places 'This proportion is 0.67'. If you have very big numbers (say revenue), you can do something like:

f'This value is ${x*10000:,.0f}'

Which prints out 'This value is $6,667' (so you can evaluate expressions inside the braces, here scaling a proportion up to a dollar amount).
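Relatedly, if you want a proportion displayed directly as a percentage, the format spec has a percent type that multiplies by 100 and appends the % sign for you:

x = 2/3
print(f'This percent is {x:.1%}')  # This percent is 66.7%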

Note also to folks that you can have multi-line f-strings by using triple quotes, e.g.:

f'''This is a super
long f-string for {x:.2f}
on multiple lines!'''

But one annoying thing is that you need to keep the whitespace correct inside of functions, even inside the triple-quoted string. Those are cases where I like using string templates. But another option is to break up the string and insert line breaks via \n.

long_str = (f'This is line 1\n'
            f'Proportion is {x:.1f}\n'
            f'This is line 3')
print(long_str)

Which prints out:

This is line 1
Proportion is 0.7
This is line 3

You can put the line breaks wherever you prefer, either at the beginning of each line or at the end of each line.
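For the whitespace issue with triple-quoted strings inside functions that I mentioned above, another option is textwrap.dedent from the standard library, which strips the common leading indentation (a minimal sketch):

from textwrap import dedent

def make_msg(x):
    # dedent strips the common leading indentation from each line
    msg = dedent(f'''\
        This is a super
        long f-string for {x:.2f}
        on multiple lines!''')
    return msg

print(make_msg(2/3))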

SPSS break long labels

This was in reference to a project where I was working with survey data, and for various graphs I needed to break up long labels. So here is an example to illustrate the problem.

* Creating simple data to illustrate.
DATA LIST FREE / X Y(2F1.0).
BEGIN DATA
1 1
2 2
3 3
4 4
END DATA.
DATASET NAME LongLab.
VALUE LABELS X
  1 'This is a reallllllllly long label'
  2 'short label'
  3 'Super long unnecessary label that is long'
  4 'Again another long label what is up with this'
.
VARIABLE LABELS
  X 'Short variable label'
  Y 'This is also a super long variable label that is excessive!'
.
EXECUTE.

GGRAPH
  /GRAPHDATASET NAME="g" VARIABLES=X Y
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("g"))
  DATA: X=col(source(s), name("X"), unit.category())
  DATA: Y=col(source(s), name("Y"))
  COORD: rect(dim(1,2), transpose())
  GUIDE: axis(dim(1))
  GUIDE: axis(dim(2), label("Value"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(X*Y))
END GPL.

So you can see SPSS shrinks the plotted data area to accommodate the long labels. (I don’t know how to control this behavior in the graph or the chart template itself, and am not sure why only the first label gets wrapped.) So we can use the \n line break trick again in SPSS to get these to split where we prefer. Here are some python functions to do the splitting (which I am sure can be improved upon), as well as to apply the splits to the current SPSS dataset. You decide the character width at which to split, and if adding a word would push the line above that width it wraps to the next line.

* Now some python to wrap long labels.
BEGIN PROGRAM PYTHON3.
import spss, spssaux

# Splits a long string with line breaks
def long_str(x,split):
    split_str = x.split(" ")
    cum = len(split_str[0])
    cum_str = split_str[0]
    for s in split_str[1:]:
        cum += len(s) + 1
        if cum <= split:
            cum_str += " " + s
        else:
            cum_str += r"\n" + s
            cum = len(s)
    return cum_str

# This grabs all of the variables in the current SPSS dataset
varList = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]

# This looks at the VALUE LABELS and splits them up on multiple lines
def split_vallab(vList, lsplit):
    vardict = spssaux.VariableDict()
    for v in vardict:
        if v in vList:
            vls= v.ValueLabels.keys()
            if vls:
                for k in vls:
                    ss = long_str(v.ValueLabels[k], lsplit)
                    if ss != v.ValueLabels[k]:
                        vn = v.VariableName
                        cmd = '''ADD VALUE LABELS %(vn)s %(k)s \'%(ss)s\'.''' % ( locals() )
                        spss.Submit(cmd)

# I run this to split up the value labels
split_vallab(varList, 20)

# This function is for VARIABLE LABELS
def split_varlab(vList,lsplit):
    for i,v in enumerate(vList):
        vlab = spss.GetVariableLabel(i)
        if len(vlab) > 0:
            slab = long_str(vlab, lsplit)
            if slab != vlab:
                cmd = '''VARIABLE LABELS %(v)s \'%(slab)s\'.''' % ( locals() )
                spss.Submit(cmd)

# I don't run this right now, as I don't need it
split_varlab(varList, 30)
END PROGRAM.
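To see what the splitting function does outside of SPSS, here is a quick standalone check of long_str in plain python (note it inserts a literal backslash-n, which the SPSS charts then render as a line break):

# Quick check of the splitting logic outside of SPSS
lab = 'This is a reallllllllly long label'
print(long_str(lab, 20))
# This is a\nreallllllllly long\nlabel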

And now we can re-run our same graph command, and it is a little nicer:

GGRAPH
  /GRAPHDATASET NAME="g" VARIABLES=X Y
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("g"))
  DATA: X=col(source(s), name("X"), unit.category())
  DATA: Y=col(source(s), name("Y"))
  COORD: rect(dim(1,2), transpose())
  GUIDE: axis(dim(1))
  GUIDE: axis(dim(2), label("Value"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(X*Y))
END GPL.

And you can also go to the variable view to see my inserted line breaks:

SPSS still does some auto-intelligence about when to wrap lines in tables/graphs (so if you do DISPLAY DICTIONARY. it will still wrap the X variable label in my default tables, even though I have no line break). But this gives you at least a bit more control over charts/tables.

Some ACS download helpers and Research Software Papers

The blog has been a bit sparse recently, as moving has been kicking my butt (hanging up curtains and recycling 100 boxes today!). So just a few quick notes.

Downloading ACS Data

First, I have posted some helper functions to work with American Community Survey (ACS) data in python. For a quick overview, if you import/define those functions, here is a quick example of downloading the 2019 Texas micro level files (for census tracts and block groups) from the census FTP site. You can pipe in another year (if available) and whatever state into the function.

# Python code to download American Community Survey data
# (get_acs5yr comes from the helper functions linked above)
import os

base = r'??????' #put your path here where you want to download data
temp = os.path.join(base,'2019_5yr_Summary_FileTemplates')
data = os.path.join(base,'tables')

get_acs5yr(2019,'Texas',base)

Some locations have census tract data available to download elsewhere, but I think the FTP site is the only place to download block group data. Then, based on those files you downloaded, you can grab the variables you want, and here I show selecting out the block groups from those fields:

interest = ['B03001_001','B02001_005','B07001_017','B99072_001','B99072_007',
            'B11003_016','B11003_013','B14006_002','B01001_003','B23025_005',
            'B22010_002','B16002_004','GEOID','NAME']
labs, comp_tabs = merge_tabs(interest,temp,data)
bg = comp_tabs['NAME'].str.find('Block Group') == 0

Then based on that data, I have an additional helper function to calculate proportions given two lists of the numerators and denominators that you want:

top = ['B17010_002',['B11003_016','B11003_013'],'B08141_002']
bot = ['B17010_001',        'B11002_001'       ,'B08141_001']
nam = ['PovertyFamily','SingleHeadwithKids','NoCarWorkers']
prep_sdh = prop_prep(bg, top, bot, nam)

So here, to do Single Headed Households with kids, you need to add two fields together for the numerator, ['B11003_016','B11003_013']. I actually initially did this example with census tract data, so I am not sure if all of these fields are available at the block group level.
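For those who want to roll this part themselves, the proportion calculation is straightforward in pandas once the tables are merged. Here is a minimal sketch of what that step is doing conceptually (the toy data and column usage are illustrative, not the actual prop_prep internals):

import pandas as pd

# Toy data with the same shape of problem: sum multiple numerator fields,
# then divide by the single denominator field
dat = pd.DataFrame({'B11003_016': [5, 2],
                    'B11003_013': [3, 1],
                    'B11002_001': [100, 50]})
dat['SingleHeadwithKids'] = (dat['B11003_016'] + dat['B11003_013']) / dat['B11002_001']
print(dat)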

I have been doing some work on demographics looking at the social determinants of health (see SVI data download, definitions), hence the work with census data. I have posted my prior example fields I use from the census, but criminologists may just use the social-vulnerability-index from the CDC – it is essentially the same as how people typically define social disorganization.

Peer Review for Criminology Software

Second, jumping the gun a bit on this, but in the works is an overlay journal for CrimRxiv. Part of the contributions we will accept are software contributions, e.g. if you write an R package to do some type of analysis function common in criminology.

It is still in the works, but we have some details up currently and a template for submission (I need to work on a markdown template, currently just a word doc). High level I wanted something like the Journal of Statistical Software or the Journal of Open Source Software (I do not think the level of detail of JSS is necessary, but wanted an example use case, which JoSS does not have).

Just get in touch if you have questions whether your work is on topic. Aim is to be more open to contributions at first. Really excited about this, as publicly sharing code is currently a thankless prospect. Having a peer reviewed venue for such code contributions for criminologists fills a very important role that traditional journals do not.

Future Posts?

Hopefully I can steal some time to continue writing posts here and there, but I will definitely be busy getting the house in order in the next month. Doing some work on mapping grids and KDE in python/geopandas, and writing about the relationship between healthcare data and police incident report data, are two topics I hope to get to in the near future for the blog.

If folks have requests for particular topics on the blog though feel free to let me know in the comments or via email!

Costs and Benefits and CrimeSolutions.gov

The Trace the other day published an article giving a (in the end fairly superficial) critique of CrimeSolutions.gov. They are right in that the particular scenario with the Bronx defenders office highlights the need for a change in the way content aggregators like CrimeSolutions present overall recommendations. I have reviewed for CrimeSolutions, and I think they did a reasonable job in creating a standardized form, but I will give my opinion here about how we can think about social programs like the Bronx defenders program beyond typical null hypothesis significance testing – we need to think about the overall costs and benefits of the programs. The stat testing almost always focuses just on the benefits part, not the cost part.

But first, before I go into more details on CrimeSolutions, I want to address Thomas Abt’s comments about potential political interference in this process. This is pizzagate level conspiracy theory nonsense from Abt. The folks reviewing for CrimeSolutions are other professors like me (or I should more specifically say I am a former professor). I’d like to see the logic from Abt for how Kate Bowers, a professor at University College London, is compromised by ties to Donald Trump or the Republican Party.

We professors get a standardized form to fill in the blanks on the study characteristics, so there is no reasonable way the standardized form biases reviews towards any particular political agenda. Studies are reviewed by multiple people (e.g. if I disagree with another researcher, we have emails back and forth to hash out why we had different ratings). So for the reviews to be politically biased like Abt suggests, it would take not just individuals working for the man, but collusion among many of us researchers.

The only potential way I can see any political influence in the process is if people at DSG selectively choose particular studies. (This would only make sense though to, say, promote more CJ oriented interventions over other social service type interventions.) Since anyone can submit a study (even non US ones!), I am highly skeptical that political bias happens in that aspect either. Pretty sure the DSG folks want people to submit more studies FYI.

FYI Abt’s book Bleeding Out is excellent, so I am not sure why he is spouting this nonsense about politics in this case. To be clear, claiming political bias in these reviews is total nonsense, but of course the current implementation of the CrimeSolutions final end recommendation could be improved. (I really like the Trace as well, and have talked to them before over Gio’s/my work on shooting fatalities; this article however doesn’t have much meat to critique CrimeSolutions beyond some study authors being unhappy and Abt’s suggestion of nefarious intentions.)

How does CrimeSolutions work now?

At a high level, CrimeSolutions wants to be a repository to help policy makers make simple decisions on different policy choices – what I take as a totally reasonable goal. So last I knew, they had five different end results a study could fall into (I am probably violating some TOS here sharing this screenshot but whatever, we do a lot of work filling in the info as a reviewer!). These include Effective, Promising, Ineffective, Null Effect, and Inconclusive.

You get weights based not only on the empirical evidence presented, but also on aspects of the design itself (e.g. experiments are given a higher weight than quasi-experiments), the outcomes examined (shorter time periods get less weight than longer time periods), the sample size, etc. It also includes fuzzier things like the description of the program (enough to replicate), and the evidence presented of adherence to the program (which gets the most points for quantitative evidence, but has categories for qualitative evidence and no evidence of fidelity as well).

So Promising is basically some evidence that it works, but the study design is not the strongest. You only get Null Effect if the study design is strong and there were no positive effects found. Again, I mean ‘no positive effects’ in the limited sense that there are crime end points specified, e.g. reduced recidivism, overall crime counts in an area, etc. (it is named CrimeSolutions). But there can of course be other non-crime beneficial aspects to the program (which is the main point of this blog post).

When I say at the beginning that the Trace article is a bit superficial, it is because it doesn’t actually present any problems with the CrimeSolutions instrument beyond the face argument of hey, I think this recommendation should be different! If the bar is simply that someone is not happy with the end result, we will forever be unhappy with CrimeSolutions. You can no doubt ex ante make arguments all day long about why you are unhappy for any idiosyncratic reason. You need to objectively articulate the problems with the CrimeSolutions instrument if you want to make any progress.

So I can agree that the brand No Effect for the Bronx defenders office does not tell the whole story. I can also say how the current CrimeSolutions instrument fails in this case, and can suggest solutions for how to amend it.

Going Beyond p-values

So in the case of the Bronx Defenders analysis, what happens is that the results are not statistically significant in terms of crime reductions. Also because it is a large sample and well done experimental design, it unfortunately falls into the more damning category of No Effects (Promising or Inconclusive are actually more uncertain categories here).

One could potentially switch the hypothesis testing on its head and do non-inferiority tests to somewhat fit the current CrimeSolutions mold. But I have an approach I think is better overall – to evaluate the utility of a program, you need to consider both its benefits (often here we are talking about some sort of crime reduction) and its costs:

Utility = Benefits - Costs

So here we just want Benefits > Costs to justify any particular social program. We can draw this inequality as a diagram, with costs and benefits as the two axes (I will get to the delta triangle symbols in a minute). In any situation in which the benefits are greater than the costs, we are on the good side of the inequality – the top side of the line in the diagram. Social programs that are more costly will need more evidence of benefits to justify the investment.

Often we are not examining a program in a vacuum, but are comparing this program to another counter-factual, what happens if that new proposed program does not exist?

Utility_a = Benefits_a - Costs_a : Program A's utility
Utility_s = Benefits_s - Costs_s : Status Quo utility

So here we want, in the end, Utility_a > Utility_s – we would rather replace the current status quo with this program, as it improves overall utility. It could be the case that the current status quo is to do nothing, which in the end is Utility_s = Benefits_s - Costs_s = 0 - 0 = 0.

It could also be the case that even if Benefits_a > Costs_a, we have Utility_a < Utility_s – so in that scenario the program is beneficial, but it is worse in overall utility than the current status quo. So in that case, even if rated Effective in current CrimeSolutions parlance, a city would not necessarily be better off ponying up the cash for that program. We could also have the situation Benefits_a < Costs_a but Utility_a > Utility_s – that is, the benefits of the program do not cover its costs, but it still has better utility than the current status quo.
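To make that last scenario concrete, here is a tiny numeric example (the numbers are purely hypothetical, just to illustrate the inequalities):

# Purely hypothetical numbers: Benefits_a < Costs_a but Utility_a > Utility_s
Benefits_a, Costs_a = 50, 80     # program A does not cover its own costs
Benefits_s, Costs_s = 0, 120     # but the status quo is expensive and does nothing
Utility_a = Benefits_a - Costs_a   # -30
Utility_s = Benefits_s - Costs_s   # -120
print(Utility_a > Utility_s)       # True, program A still improves overall utility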

So to get whether the new proposed program has added utility over the status quo, we take the difference in two equations:

  Utility_a = Benefits_a - Costs_a : Program A's utility
- Utility_s = Benefits_s - Costs_s : Status Quo utility
--------------------------------------------------------
Δ Utility = Δ Benefits - Δ Costs

And we end up with our changes in the graph I showed before. Note that this implies a particular program can actually have negative effects on crime control benefits, but if it reduces costs enough it may be worth it. For example Megan Stevenson argues pre-trial detention is not worth the costs – although it no doubt will increase crime some, it may not be worth it. Although Stevenson focuses on harms to individuals, she may even be right just in terms of straight up costs of incarceration.

For the Bronx defenders analysis, they showed no benefits in terms of reduced crime. But the intervention was a dramatic cost savings compared to the current status quo. I represent the Bronx defenders results as a grey box in the diagram. It is centered on the null effect for crime benefits, but is clearly in the positive utility part of the graph. If it happened to be expensive, or showed no difference in costs, the box would shift right and would not clearly be in the effective portion.

For another example, I show the box in this graph not as a point, but as an area. An intervention can show some evidence of efficacy, but not reach the p-value < 0.05 threshold. The Chicago summer jobs program is an example of this. It is rated as No Effects. I think DSG could reasonably up the sample size requirement for individual recidivism studies, but even if this were changed to the Promising or Inconclusive recommendation in CrimeSolutions parlance, the problem of having a binary yes/no end decision still remains.

So here the box has some uncertainty associated with it in terms of the benefits, but has more area on the positive side of the utility line. (These are just generic diagrams, not meant to be an exact representation; it could be that more of the square's area should be above the positive utility line given the estimates.) If the authors want to argue that the correct counter-factual status quo is more expensive – which would shift the pink box to the left – the program as-is could be a good idea to invest in more. Otherwise it makes sense for the federal govt to invest in more research programs trying to replicate, although from a local govt perspective it may not be worth the risk to invest in something like this given the uncertainty. (Just based on the Chicago experiment it probably would be worth the risk for a local govt IMO, but I believe overall jobs and crime programs have a less than stellar track record.)

So these diagrams are nice, but they leave implicit how CrimeSolutions would in practice measure costs to put a program on the diagram. In the worst case scenario costs are totally unknown (so they would span the entire X axis here), but in many scenarios I imagine people can give reasonable estimates of the costs of social programs. So I believe a simple solution to the current CrimeSolutions issue is two-fold.

  1. They should incorporate costs somewhere into their measurement instrument. This could either be as another weighted term in the Outcome Evidence/Primary Outcomes portion of the instrument, or as another totally separate section.
  2. It should have breakdowns on the website that are not just a single final decision endpoint, but show a range of potential results in a diagram like I show here. So while not quite as simple as the binary yes/no in the end, I believe policy makers can handle that minor added bit of complexity.

Neither of these will make CrimeSolutions foolproof – but it is better to give suggestions to improve it than to suggest getting rid of it completely. I can foresee issues of defining what the relevant costs are in this framework. The Stevenson article I linked to earlier talks about individual harm; someone may argue that is not the right cost to calculate (and could do something like a willingness to pay experiment). But that goes for the endpoint outcomes as well – we could argue whether or not they are reasonable for the situation. So I imagine the CrimeSolutions/DSG folks can amend the instrument to take these cost aspects into account.

Down the rabbit hole with R functions

I had a friend the other day ask me about modifying the plot that goes with R’s boxCox function. In particular they had multiple plots, and wanted to make the Y axes consistent between the different dependent variables. For a typical R base plot call you can specify ylim = c(whatever_low, whatever_high), but if you look at the function, in the end it does not let you do this yourself (it fixes ylim based on the log-likelihood range).

library(car)
data(trees)
# Making a second Y variable for illustration later
trees$V2 <- trees$Volume*2 + 3*rnorm(nrow(trees))

# Original function, https://rdrr.io/rforge/car/man/boxCox.html
orig_output <- with(trees, boxCox(Volume ~ log(Height) + log(Girth), data = trees))

So if we look at the orig_output object, it gives us the x and y values for the above plot, but it does not give us the dashed line locations in the plot.

Typically here I would type out boxCox without the parentheses at the prompt to get the function definition. That does not quite work here, as it is unhelpful and just gets us UseMethod("boxCox"). From here we can do methods(boxCox) to help slightly more – we can see that the boxCox function really has 3 different methods, which depend on the class of the original input.

Here we are specifying the formula interface to the function call, so let’s look at getAnywhere(boxCox.formula):

Well, that is not very helpful, so let’s look at getAnywhere(boxCox.default) instead:

Ok, that is what we are going for. If you look into the function, at the very end you will see how it draws those dashed reference lines (anything drawn with lty = 2 in the code).

So what is happening here is that the different boxCox function calls are all daisy chained together, and it goes from formula -> lm object -> the original boxCox function. Now that we can see the function, we can make some small changes to have it return the locations of the vertical/horizontal reference lines that we want (or we could change it to accept a ylim argument directly). I name this new function boxCox.new.

# Modifying the function to return all the info you need
boxCox.new <- function(object, lambda = seq(-2, 2, 1/10), plotit = TRUE, interp = plotit, 
    eps = 1/50, xlab = NULL, ylab = NULL, family = "bcPower", 
    param = c("lambda", "gamma"), gamma = NULL, grid = TRUE, 
    ...) 
{
    if (class(object)[1] == "mlm") 
        stop("This function is for univariate response only")
    param <- match.arg(param)
    ylab <- if (is.null(ylab)) {
        if (family != "bcnPower") 
            "log-likelihood"
        else {
            if (param == "gamma") {
                expression(max(logL[gamma](lambda, gamma)))
            }
            else {
                expression(max[lambda](logL(lambda, gamma)))
            }
        }
    }
    else ylab
    xlab <- if (is.null(xlab)) {
        if (param == "lambda") 
            expression(lambda)
        else expression(gamma)
    }
    else xlab
    #fam <- matchFun(family) #Needed to change this to base function
    fam <- match.fun(family)
    if (is.null(object$y) || is.null(object$qr)) 
        stop(paste(deparse(substitute(object)), "does not have both 'qr' and 'y' components"))
    y <- object$y
    n <- length(y)
    xqr <- object$qr
    xl <- loglik <- if (family != "bcnPower") 
        as.vector(lambda)
    else {
        if (param == "lambda") 
            as.vector(lambda)
        else {
            if (!is.null(gamma)) 
                as.vector(gamma)
            else {
                p1 <- powerTransform(object, family = "bcnPower")
                gam <- p1$gamma
                se <- sd(y)
                seq(max(0.01, gam - 3 * se), gam + 3 * se, length = 100)
            }
        }
    }
    m <- length(xl)
    if (family != "bcnPower") {
        for (i in 1L:m) {
            yt <- fam(y, xl[i], j = TRUE)
            loglik[i] <- -n/2 * log(sum(qr.resid(xqr, yt)^2))
        }
    }
    else {
        lambda.1d <- function(gamma) {
            fn <- function(lam) bcnPowerllik(NULL, y, NULL, lambda = lam, 
                gamma = gamma, xqr = xqr)$llik
            f <- optimize(f = fn, interval = c(-3, 3), maximum = TRUE)
            f$objective
        }
        gamma.1d <- function(lambda) {
            fn <- function(gam) bcnPowerllik(NULL, y, NULL, lambda = lambda, 
                gamma = gam, xqr = xqr)$llik
            f <- optimize(f = fn, interval = c(0.01, max(y)), 
                maximum = TRUE)
            f$objective
        }
        for (i in 1L:m) {
            loglik[i] <- if (param == "lambda") 
                gamma.1d(loglik[i])
            else lambda.1d(loglik[i])
        }
    }
    if (interp) {
        sp <- spline(xl, loglik, n = 100)
        xl <- sp$x
        loglik <- sp$y
        m <- length(xl)
    }
    if (plotit) {
        mx <- (1L:m)[loglik == max(loglik)][1L]
        Lmax <- loglik[mx]
        lim <- Lmax - qchisq(19/20, 1)/2
        # Adding in vector to contain x functions location and top line
        xF <- c()
        xT <- c()
        plot(xl, loglik, xlab = xlab, ylab = ylab, type = "n", 
            ylim = range(loglik, lim))
        if (grid) {
            grid(lty = 1, equilogs = FALSE)
            box()
        }
        lines(xl, loglik)
        plims <- par("usr")
        abline(h = lim, lty = 2)
        y0 <- plims[3L]
        scal <- (1/10 * (plims[4L] - y0))/par("pin")[2L]
        scx <- (1/10 * (plims[2L] - plims[1L]))/par("pin")[1L]
        text(xl[1L] + scx, lim + scal, " 95%")
        la <- xl[mx]
        if (mx > 1 && mx < m) {
            segments(la, y0, la, Lmax, lty = 2)
            xF <- c(xF, la)
            xT <- c(xT, Lmax)
        }
        ind <- range((1L:m)[loglik > lim])
        if (loglik[1L] < lim) {
            i <- ind[1L]
            x <- xl[i - 1] + ((lim - loglik[i - 1]) * (xl[i] - 
                xl[i - 1]))/(loglik[i] - loglik[i - 1])
            segments(x, y0, x, lim, lty = 2)
            xF <- c(xF, x)
            xT <- c(xT, lim)
        }
        if (loglik[m] < lim) {
            i <- ind[2L] + 1
            x <- xl[i - 1] + ((lim - loglik[i - 1]) * (xl[i] - 
                xl[i - 1]))/(loglik[i] - loglik[i - 1])
            segments(x, y0, x, lim, lty = 2)
            xF <- c(xF, x)
            xT <- c(xT, lim)
        }
    # See definitions of hline, vlines, vtop, ybase, just returning that info
    return(list(x = xl, y = loglik, hline = lim, vlines = xF, vtop = xT, ybase = y0))
    }
    list(x = xl, y = loglik)
}

But this won’t work offhand by just calling boxCox.new with our same prior function calls, so we need to entirely replace the original boxCox.default function for our daisy chain of function references to work. Here we can use the assignInNamespace function to effectively overwrite the original.

# Need to do this to get it to work with lm objects
assignInNamespace("boxCox.default",boxCox.new,ns="car")

r1 <- with(trees, boxCox(Volume ~ log(Height) + log(Girth), data = trees))
r2 <- with(trees, boxCox(V2 ~ log(Height) + log(Girth), data = trees))

And now if we inspect either r1 or r2 you can see it returns the info we want.

And now we can build our own set of plots. I don’t have the nice text annotations (or the default grid lines), but I leave that extra work to the reader.

par(mfrow=c(2,1), mai = c(1, 1, 0.2, 1))
plot(r1$x,r1$y,ylim=c(-160,-70), type='l', xaxp = c(-160,-70, 8),
     xlab=expression(lambda),ylab='log-Likelihood')
# You need to specify the bottom of the segment to match your limit
abline(h = r1$hline, lty = 2)
segments(r1$vlines, -160, r1$vlines, r1$vtop, lty = 2)
plot(r2$x, r2$y,ylim=c(-160,-70), type='l', xaxp = c(-160,-70, 8),
     xlab=expression(lambda),ylab='log-Likelihood')
segments(r2$vlines, -160, r2$vlines, r2$vtop, lty = 2)
abline(h = r2$hline, lty = 2)

I have done this previously for default plots in base R that I wanted to remake myself in ggplot, which you could do here as well with a faceted plot instead of the par deal with multiple rows (ggplot takes care of the spacing a bit nicer). But cajoling those different data frames into facets for ggplot is too much work for this quick tip.

Minimum detectable effect sizes for place based designs

So I was reading Blattman et al.’s (2018) work on a hot spot intervention in Bogotá the other day. It is an excellent piece, but in a supplement to the paper Blattman makes the point that while his study is very high powered to detect spillovers, most other studies are not. I am going to detail here why I disagree with his assessment on that front.

In appendix A he has two figures, one for the direct effect comparing the historical hot spot policing studies (technically he uses the older 2014 Braga study, but here is the cite for the updated Braga et al., 2020).

The line signifies a Cohen’s D of 0.17, and here is the same graph for the spillover estimates:

So you can see Blattman’s study, in terms of the total number of spatial units of analysis, breaks the chart so to speak. You can see however there are plenty of hot spot studies in either chart that reported statistically significant differences but do not meet the 0.17 threshold in Chris’s chart. How can this be? Well, Chris is goal switching a bit here: he is saying that using his estimator the studies appear underpowered. The original studies on the graph though did not necessarily use his particular estimator.

The best but not quite perfect analogy I can think of is this. Imagine I build a car that gets better gas mileage compared to the current car in production. Then someone critiques this by saying the materials that go into production of the car have a worse carbon footprint, so my car is actually worse for the environment. It would be fine to argue that a different estimate of total carbon footprint is reasonable (here Chris could argue his estimator is better than the ones the original papers used). It is wrong though to say my car does not actually improve gas mileage. So it is wrong for Chris to say the original articles are underpowered based on his estimator; they may be well powered using a different estimator.

Indeed, using either my WDD estimator (Wheeler & Ratcliffe, 2018) or Wilson’s log IRR estimator (Wilson, 2021), I will show how power does not grow with more experimental units, but with a larger baseline number of crimes for those estimators. Both of these estimators only use two spatial units of analysis, so in Chris’s chart they will never gain more power.

One way I think about the issue for spatial designs is this – you could always split up a spatial lattice into ever finer and finer spatial units of analysis. For example Chris could change his original design to use addresses instead of street segments, and split up the spillover buffers into finer slices as well. Do you gain something for doing nothing though? I doubt it.

I describe in my dissertation how finer spatial units of analysis allow you to check for finer levels of spatial spillovers, e.g. can check if crime spills over from the back porch to the front stoop (Wheeler, 2015). But when you do finer spatial units, you get more cold floor effects as well due to the limited nature of crime counts – they cannot go below 0. So designs with lower baseline crime rates tend to show lower power (Hinkle et al., 2013).

MDE for the WDD and log IRR

For minimum detectable effect (MDE) sizes for OLS type estimators, you need to specify the variance you expect the underlying treated/control groups to have. With the count type estimators I will show here, the variance is fixed according to the count, so all I need to specify is the alpha level of the test. Here I will use a default 0.05 alpha level (with different lines for one-tailed vs two-tailed tests). The other assumption is the distribution of crime counts between treated/control areas. Here I assume they are all equal across the 4 units (pre/post by treated/control). For my WDD estimator this actually does not matter, but for the later IRR estimator it does (so the lines won’t really be exact for his scenario).

So here is the MDE for mine and Jerry’s WDD estimator:

What this means is that if you have an average of 20 crimes in the treated/control areas for each time period separately, you would need to find a reduction of 15 crimes to meet this threshold MDE for a one-tailed test. It is pretty hard when starting with low baselines! For an example close to this, if the treated area went from 24 to 9, and the control area stayed at 24 in both periods, this would meet the minimum treated reduction of 15 crimes in this example.

And here is the MDE for the log IRR estimator. The left hand Y axis has the logged effect, and the right hand side has the exponentiated IRR (incident rate ratio).

Since the IRR is commonly thought of as a percent reduction, this suggests even with baselines of 200 crimes, for Wilson’s IRR estimator you need percent reductions of over 20% relative to control areas.
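To put rough numbers on both of those claims, here is a quick back-of-the-envelope check that mirrors the wdd_mde and lirr_mde functions given in the full code below:

import numpy as np
from scipy.stats import norm

z1 = norm.ppf(0.95)    # one-tailed alpha 0.05
z2 = norm.ppf(0.975)   # two-tailed alpha 0.05

# WDD MDE with an average of 20 crimes in each pre/post treated/control cell
print(z1 * np.sqrt(20*4))             # ~14.7, so about a 15 crime reduction

# log IRR MDE with an average of 200 crimes per cell, exponentiated to an IRR
print(np.exp(-z2 * np.sqrt(4/200)))   # ~0.76, so roughly a 24% reduction needed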

So I have not gone through the more recent Braga et al. (2020) meta-analysis. I do not know if they have the data readily available to draw the points on this plot the same as in the Blattman article. To be clear, it may be Blattman is right and these studies are underpowered using either his or my estimator, I am not sure. (I think they probably are quite underpowered to detect spillover, since this presumably will be an even smaller amount than the direct effect. But that would not explain estimates of diffusion of benefits commonly found in these studies!)

I also do not know if one estimator is clearly better or not – for example Blattman could use my estimator if he simply pools all treated/control areas. This is not obviously better than his approach though, and foregoes any potential estimates of treatment effect variance (I will be damned if I can spell that word starting with het even close enough for autocorrect). But maybe the pooled estimate is OK, Blattman does note that he has cold floor effects in his linear estimator – places with higher baselines have larger effects. This suggests Wilson’s log IRR estimator with the pooled data may be just fine and dandy for example.

Python code

Here is the python code in its entirety to generate the above two graphs. You can see the two functions, wdd_mde and lirr_mde, that calculate the MDE given an alpha level and average crime counts in each area, if you are planning your own study.

'''
Estimating minimum detectable effect sizes
for place based crime interventions

Andy Wheeler
'''

import numpy as np
from scipy.stats import norm
import matplotlib
import matplotlib.pyplot as plt
import os
my_dir = r'D:\Dropbox\Dropbox\Documents\BLOG\min_det_effect'
os.chdir(my_dir)

#########################################################
#Settings for matplotlib

andy_theme = {'axes.grid': True,
              'grid.linestyle': '--',
              'legend.framealpha': 1,
              'legend.facecolor': 'white',
              'legend.shadow': True,
              'legend.fontsize': 14,
              'legend.title_fontsize': 16,
              'xtick.labelsize': 14,
              'ytick.labelsize': 14,
              'axes.labelsize': 16,
              'axes.titlesize': 20,
              'figure.dpi': 100}

matplotlib.rcParams.update(andy_theme)
#########################################################

#########################################################
# Functions for MDE for WDD and logIRR estimator


def wdd_mde(avg_counts,alpha=0.05,tails='two'):
    se = np.sqrt( avg_counts*4 )
    if tails == 'two':
        a = 1 - alpha/2
    elif tails == 'one':
        a = 1 - alpha
    z = norm.ppf(a)
    est = z*se
    return est

def lirr_mde(avg_counts,alpha=0.05,tails='two'):
    se = np.sqrt( (1/avg_counts)*4 )
    if tails == 'two':
        a = 1 - alpha/2
    elif tails == 'one':
        a = 1 - alpha
    z = norm.ppf(a)
    est = z*se
    return est

# Generating regular grid from 10 to 200
cnts = np.arange(10,201)
wmde1 = wdd_mde(cnts, tails='one')
wmde2 = wdd_mde(cnts)

imde1 = lirr_mde(cnts, tails='one')
imde2 = lirr_mde(cnts)

# Plot for WDD MDE
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(cnts, wmde1,color='k',linewidth=2, label='One-tailed')
ax.plot(cnts, wmde2,color='blue',linewidth=2, label='Two-tailed')
ax.set_axisbelow(True)
ax.set_xlabel('Average Number of Crimes in Treated/Control')
ax.set_ylabel('Crime Count Reduction')
ax.legend(loc='upper left')
plt.xticks(np.arange(0,201,20))
plt.yticks(np.arange(10,61,5))
plt.title("WDD MDE alpha level 0.05")
plt.savefig('WDD_MDE.png', dpi=500, bbox_inches='tight')

# Plot for IRR MDE
fig, ax = plt.subplots(figsize=(8,6))
ax2 = ax.secondary_yaxis("right", functions=(np.exp, np.log))
ax.plot(cnts,-1*imde1,color='k',linewidth=2, label='One-tailed')
ax.plot(cnts,-1*imde2,color='blue',linewidth=2, label='Two-tailed')
ax.set_axisbelow(True)
ax.set_xlabel('Average Number of Crimes in Treated/Control')
ax.set_ylabel('log IRR')
ax.set_ylim(-0.16, -1.34)
ax.legend(loc='upper right')
ax.set_yticks(-1*np.arange(0.2,1.31,0.1))
ax2.set_ylabel('IRR')
ax2.grid(False)
plt.xticks(np.arange(0,201,20))
plt.title("IRR MDE alpha level 0.05")
plt.savefig('IRR_MDE.png', dpi=500, bbox_inches='tight')

#########################################################

Using Random Forests in ArcPro to forecast hot spots

So awhile back I had a request about how to use the random forest tool in ArcPro for crime prediction. Here I will show how to set up the data to basically replicate how I used random forests in this paper, Mapping the Risk Terrain for Crime using Machine Learning. ArcPro is actually pretty nice for replicating how I set up the models in that paper, but I will discuss some limitations at the end.

I am not sharing the whole project, but the data I use you can download from a prior post of mine, RTM Deep Learning style. Here is my initial data set up based on the spreadsheets in that post. For the original data I have crimes aggregated to street units in DC, Prepped_Crime.csv (street units are midpoints of street blocks and intersections), and then point dataset tables of alcohol outlet locations AlcLocs.csv, Metro entrances MetroLocs.csv, and 311 calls for service Calls311.csv.

I then turn those original csv files into several spatial layers, via the display XY coordinates tool (these are all projected data FYI). On top of that you can see I have two different kernel density estimates – one for 311 calls for service, and another for the alcohol outlets. So the map is a bit busy, but above is the basic set of information I am working with.

For the crimes, these are the units of analysis I want to predict. Note that this vector layer includes spatial units of analysis even with 0 crimes – this is important for the final model to make sense. So here is a snapshot of the attribute table for my street units file.

Here we are going to predict the Viol_2011 field based on other information, both other fields included in this street units table, as well as the other point/kernel density layers. So while I imagine that ArcPro can predict for raster layers as well, I believe it will be easier for most crime analysts to work with vector data (even if it is a regular grid).

Next, in the Analysis tab at the top click the Tools toolbox icon, and you get a bar on the right to search for different tools. Type in random forest – several different tools come up (they just have slightly different GUIs) – the one I showcase here is the Spatial Stats tools one.

So this next screenshot shows filling in the data to build a random forest model to predict crimes.

  1. In the input training features, put your vector layer for the spatial units you want to predict. Here mine is named Prepped_Crime_XYTableToPoint.
  2. Select the variable to predict, Viol_2011. The options are columns in the input training features layer.
  3. Explanatory Training Variables are additional columns in your vector layer. Here I include the XY locations, whether a street unit is an intersection, and then several different area variables. These variables are all calculated outside of this procedure.

Note for the predictions, if you just have 0/1 data, you can change the variable to predict as categorical. But later on in determining hotspots you will want to use the predicted probability from that output, not the binary final threshold.

For explanatory variables, here it is ok to use the XY coordinates, since I am predicting for the same XY locations in the future. If I fit a model for Dallas, and then wanted to make predictions for Austin, the XY inputs would not make sense. Finally it is OK to also include other crime variables in the predictions, but they should be lagged in time. E.g. I could use crimes in 2010 (either violent/property) to predict violent crimes in 2011. This dataset has crimes in 2012, and we will use that to validate our predictions in the end.

Then we can also include traditional RTM style distance and kernel density inputs as well into the predictions. So we then include in the training distance features section our point datasets (MetroLocs and AlcLocs), and in our training rasters section we include our two kernel density estimates (KDE_311 calls and KernelD_AlcL1 is the kernel density for alcohol outlets).

Going onto the next section of filling out the random forest tool, I set the output for a layer named PredCrime_Test2, and also save a table for the variable importance scores, called VarImport2. The only other default I change is upping the total number of trees, and click on Calculate Uncertainty at the bottom.

In my experience with random forests, for binary classification problems it is a good idea to set the minimum leaf size to say 50~100, and the depth of the trees to 5~10. But for regression problems, regularizing the trees is not necessarily as big of a deal.
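Here those settings are options in the ArcPro tool, but if you were scripting the same idea outside of ArcPro, a minimal scikit-learn sketch of those hyperparameter choices might look like this (scikit-learn is my assumption here, not part of the ArcPro workflow, and the variable names are placeholders):

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Binary 0/1 outcome: regulate the trees a bit
clf = RandomForestClassifier(n_estimators=1000, min_samples_leaf=50, max_depth=5)

# Count/continuous outcome (like Viol_2011): the defaults are often fine
reg = RandomForestRegressor(n_estimators=1000)

# clf.fit(X, y_binary) and reg.fit(X, y_count) would then fit the models,
# where X/y are placeholders for your own prepared data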

Click run, and then even with 1000 trees this takes less than a minute. I do get some errors about missing data (I should not have masked the kernel density to the DC boundary, but buffered the boundary slightly instead, I think). But in the end you get a new layer, here named PredCrime_Test2. The default symbology for the residuals is not helpful, so here I changed it to circles proportional to the predicted value.

So you would prioritize your hotspots based on these predicted high crime areas, which you can see in the screenshot are close to the historical ranks but not a 100% overlap. Also this provides a potentially bumpy (but mostly smoothed) set of predicted values.

Next what I did was a table join, so I could see the predicted values against the future 2012 violent crime data. This is just a snapshot, but see this blog post about different metrics you can use to evaluate how well the predictions do.
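As one concrete example of such a metric, a common check for these forecasts is to flag the top k predicted street units and see what share of the future 2012 crimes they capture. Here is a minimal pandas sketch (the column names are placeholders for the joined table, not the actual field names):

import pandas as pd

# joined is assumed to be the joined table with the model prediction and the
# future 2012 violent crime counts (column names here are just placeholders)
def hit_rate(joined, pred='Predicted', future='Viol_2012', top_k=100):
    top = joined.nlargest(top_k, pred)               # top k predicted street units
    return top[future].sum() / joined[future].sum()  # share of future crime captured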

Finally, we saved the variable importance table. I am not a big fan of these; the metrics are quite volatile in my experience. So this shows the sidewalk area and kernel density for 311 calls are the top two, and the metro locations distance and intersection are at the bottom of the variable importance.

But I don’t think these are very helpful in the end (even if they were not volatile). For example, even if 311 calls for service are a good predictor, you can have a hot spot without a large number of 311 calls nearby (so other factors can combine, and different hotspots can have different factors that contribute to them being high risk). So I show in my paper linked at the beginning how to make reduced form summaries for hot spots using Shapley values. It is not possible using the ArcPro toolbox (but I imagine if you bugged ESRI enough they would add this feature!).

This example is for long term crime forecasting, not for short term. You could do random forests for short term forecasts, such as predicting next week based on several of the prior weeks of data. I believe this would be more challenging to automate in the ArcPro environment than just scripting it in R or python though. I prefer the long term forecasts anyway for problem oriented strategies.

Health Insurance Claims Data via HMS Data Sharing for Researchers

I have been sharing this with a bunch of people recently so figured it would be appropriate to share on the blog. My company, HMS, which audits health insurance claims, has a data sharing agreement for researchers.

So this provides access to micro level Medicaid health insurance claims for a set of states. It includes 10 states currently:

It provides a limited set of person level info, provider level info (e.g. the hospital location of the claim), as well as all the info that comes with the insurance claim itself. Mostly folks will be interested in ICD codes associated with the claim I imagine, as well as maybe the CPT codes. (CPT are for particular procedures, whereas ICD are more like broader diagnoses for the overall visit.)

It is only criminology adjacent, and the coverage being limited to Medicaid makes it tough for some research designs. But as examples criminology folks may be interested in: you could look for domestic violence ICD codes, or look at provider level behavior for opioid prescriptions, or mental health treatment claims, etc.

One of the things with criminology research is it is very hard to identify the costs of crime. Looking at victimization costs via health insurance claims may be an underestimate, but has a pretty clear societal cost. And the limited coverage to Medicaid will make cost estimates on the low side (although more directly relevant to the state, and among the most vulnerable population).

The value of a PhD

For my current work as a data scientist, I spend most of my time writing SQL queries, generating some sort of predictive model on that data using python, and automating those data pipelines using additional command line scripts. Pretty much none of the coding I do on a day to day basis is something I learned in my entire educational career.

The only specific coding classes I took in school were SAS in undergrad and SPSS in grad. All other coding was in Stata and a very tiny bit in R, both incidental to statistical classes. Even those should hardly count, as all it entails is load a dataset and run reg y x or something similar.

That focuses on the software engineering side – the other side of being a data scientist is essentially being an applied mathematician. That may sound fancy, but the work I do I like to think is more akin to accounting with probabilities (where I have to personally create models to estimate the probabilities). While I had extensive quantitative training in graduate school, again nothing I was taught even remotely resembles the mathematics I use on a regular basis at my job.

My social science education entirely focused on causal inference, estimating parameters on the right hand side of the regression equation. I did not cover prediction/forecasting/machine-learning one iota in my classes. I did not even have any classes on cost-benefit analysis, which is more akin to me calculating potential return on investment when I am creating new machine learning models for my company.

The only thing I do regularly at my job you could reasonably point to specific educational training/prep on was presenting results in PowerPoint presentations.

That being said, no way I would be in my current position if I did not have a PhD. For a potential counter-factual, I debated on dropping out of undergrad at one point and going to community college to install HVAC systems. I feel pretty comfortable assuming I would not have ended up as a data scientist if I took that career path. (Before you think to poo-poo on that career path choice, it is easily possible my personal net worth would be in the same ballpark at this point in my life in that counter-factual installing HVAC world. There are significant opportunity costs you are eating when you pursue a PhD.)

So what exactly was the value of my PhD? While you take some classes as a PhD student, I don’t see the main benefit of those as being vocational in nature. When pursuing a PhD it is a full time endeavor, and it is the entire environment that marks it as a major difference from undergraduate education. Pretty much every conversation you have as a PhD student is focused on science.

A second major difference is that you are not a passive consumer of scientific research – you have bridged to becoming a producer of that knowledge. A PhD dissertation by its nature is very sink or swim – you are expected to come up with a particular research topic/agenda, and conduct the appropriate analysis to investigate that particular topic, then share your results with the world. This is very different than working in a job where someone tells you what to do – you show up in the morning and you have 100% latitude to pursue whatever you want.

These two things together I believe are where the value lies in a PhD. The independence necessary to be successful in a PhD is by its nature not something you can get via prior work experience (unless you count say starting your own business). This, coupled with the scientific environment, provides an atmosphere where constant learning is necessary to get to the finish line of the dissertation. Even if I were still an academic, it would always be necessary for me to consume new material, teach myself new things, and apply that to the work I am pursuing.

So while I did not learn python programming or machine learning in grad school, I just go out, try to consume as much as I can on the material, and apply that knowledge to solve the current problems I am dealing with. There will always be something new I need to teach myself while I am still working, but that is OK. I have the means to teach myself those things from my PhD experience. I am not sure I would have really ever gotten to that point just by focusing on vocational aspects (e.g. taking classes on machine learning or programming) – I think I only got to that point by having to pursue my own independent research.


I’ve been musing on this more as potential students ask me whether it is worth it to pursue a PhD. I have mixed feelings, but have settled on this simple dichotomy – if you are only pursuing a PhD because you want to teach, I have grave reservations against recommending a PhD. The supply of candidates for these professor positions greatly outpaces the demand from universities. So even if you do well as a student, there is no guarantee you will get a tenure track position. In the current market, where there are dozens of really good candidates for any position, network effects can dominate the decision.

But, if you are more open to other potential positions, such as public sector researcher positions, think tanks, or private sector data science, I feel more comfortable in saying going for the PhD is a reasonable career choice.

Unfortunately, current education across the social sciences is somewhat lacking in terms of preparing you to be competitive for private sector data science. As I stated at the beginning of this post, I did not personally learn any of the tools I use regularly at my job via traditional education, but more as ancillary to my particular research interests. To follow in my path, the research you pursue needs to somewhat match the skills the current market wants, and these include:

  • predictive modeling (e.g. tree based models, boosted models, deep learning)
  • legitimate coding skills in python/R, as well as tools like git/Docker
  • working with moderately large datasets (SQL, Hadoop, or online AWS)
  • data visualization to explain results/models

I am hoping my former colleagues in social sciences will do a better job of expanding the graduate curricula to better teach these skills. They have utility for the more traditional research as well. I am not holding my breath though for that. So in the meantime if you are pursuing a PhD in the social sciences, and you want to pursue a data science job (or simply hedge in case you cannot land a tenure track gig), these are skills you need to develop on your own while also doing your PhD.

Updated SPSS Chart Template (V26) and Chart Notes

So I was helping someone out the other day with SPSS chart templates, and figured it would be a good opportunity to update mine. I have a new template version for V26, V26_ChartStyle.sgt. I also have some code to illustrate the template, plus a few random SPSS charting tips along the way.

For notes about chart templates, I have written about it previously, and Ruben has a nice tutorial on his site as well. For those not familiar, SPSS chart templates specify the default looks of the chart, very similar to CSS for HTML. So for example, if you want your labels to be a particular font, or you want the background for the chart to be light grey, or you want the gridlines to be dashed, are all examples you can specify in a chart template.

It is a plain text XML file under the hood – unfortunately there is not any official documentation from IBM SPSS about what is valid that I am aware of, so amending it just takes trial and error. So in case anyone from SPSS pays attention here: if you have other docs to note, let me know!

Below I will walk through my updated template and various charts to illustrate the components of it.

Walkthrough Template Example

So first to start out, if you want to specify a new template, you can either do it via the GUI (Edit -> Options -> Charts Tab), or via syntax such as SET CTEMPLATE='data\template.sgt'. Here I make some fake data to illustrate various chart types.

SET SEED 10.
INPUT PROGRAM.
LOOP Id = 1 TO 20.
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
DATASET NAME Test.
COMPUTE Group = TRUNC(RV.UNIFORM(1,7)).
COMPUTE Pair = RV.BERNOULLI(0.5).
COMPUTE X = RV.UNIFORM(0,1).
COMPUTE Y = RV.UNIFORM(0,1).
COMPUTE Time = MOD(Id-1,10) + 1.
COMPUTE TGroup = TRUNC((Id-1)/10).
FORMATS Id Group Pair TGroup Time (F2.0) X Y (F2.1).
VALUE LABELS Group 
  1 'A' 
  2 'B'
  3 'C'
  4 'D'
  5 'E'
  6 'F'
.
VALUE LABELS Pair
  0 'Group One'
  1 'Group Two'
.
VALUE LABELS TGroup
 0 'G1'
 1 'G2'
.
EXECUTE.

Now to start out, I show off my new color palette using a bar chart. I use a color palette derived from Van Gogh’s bedroom painting as my default set, an idea I got from Sidonie Christophe (and I used this palette generator website). I also use a monospace font, Consolas, for all of the text (SPSS pulls from the system fonts, and I am a windows guy). The default font sizes are too small IMO, especially for presentations, so the X/Y axes tick labels are 14pt.

*Bar graph to show colors, title, legend.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Group MEAN(X)[name="MEAN"]
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Group=col(source(s), name("Group"), unit.category())
  DATA: X=col(source(s), name("MEAN"))
  GUIDE: axis(dim(1), label("Group"))
  GUIDE: axis(dim(2), label("Mean X"))
  GUIDE: legend(aesthetic(aesthetic.color.interior), label("Group"))
  GUIDE: text.title(label("Main Title"))
  GUIDE: text.subtitle(label("Subtitle"))
  SCALE: cat(dim(1), include("1", "2", "3", "4", "5", "6"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(Group*X), shape.interior(shape.square), color.interior(Group))
END GPL.

I would actually prefer the legend to have an outline box, but it doesn't behave so nicely when I use a continuous aesthetic and pads too much. I will end the blog post with things I wish I could do with templates and/or GPL. (Documentation to help with the template settings for legends is one example, and I also wish I could specify the location of the legend in inline GPL code.)

The next chart shows a scatterplot. One thing I make sure of in the template is to not prevent you from specifying what you want in inline GPL where that is possible. So for example you could set a template default for scatterplots of size 12 points with a grey interior, but as far as I can tell that then prevents you from changing the color later on. Also I show a trick of using the subtitle to make a Y axis label at the top of the chart, a trick I saw originally from Naomi Robbins. (She actually prefers the text orientation on the side running north/south if you take that approach, but I prevented that in this template – I really dislike having to turn my head to read it!)

* Scatterplot, showing Y axis trick with subtitle.
* Would prefer the setStyle subtype="simple" type="scatter" works as default.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y
  /GRAPHSPEC SOURCE=INLINE
  /FITLINE TOTAL=NO.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2))
  GUIDE: text.subtitle(label("  Y Label"))
  ELEMENT: point(position(X*Y), size(size."12"), color.interior(color."bebebe"))
END GPL.

The next chart shows off my default data labels. I have the labels positioned at the center point, and they also inherit the color from the data element. One trick I like is to use the polygon element to explicitly set the locations of labels (and you can draw the polygons transparent, so you only see the labels in the end). So if you want to put labels at the top of bars (or above a line graph) like Excel does, you can just shift them up a smidge.

*Label trick for bar graphs.
GGRAPH 
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Group COUNT()[name="COUNT"]
  /GRAPHSPEC SOURCE=INLINE. 
BEGIN GPL 
  SOURCE: s=userSource(id("graphdataset")) 
  DATA: Group=col(source(s), name("Group"), unit.category()) 
  DATA: COUNT=col(source(s), name("COUNT")) 
  TRANS: ext = eval(COUNT + 0.2)
  GUIDE: axis(dim(1)) 
  GUIDE: axis(dim(2)) 
  GUIDE: text.subtitle(label("Count")) 
  SCALE: cat(dim(1), include("1", "2", "3", "4", "5", "6")) 
  SCALE: linear(dim(2), include(0)) 
  ELEMENT: interval(position(Group*COUNT), color.interior(color."bebebe")) 
  ELEMENT: polygon(position(Group*ext), label(COUNT), color.interior(color.red),
                    transparency.interior(transparency."1"), transparency.exterior(transparency."1")) 
END GPL.

Next up is a histogram. SPSS has an annoying habit of publishing a statistics summary frame with the mean/standard deviation when you make a histogram. Unfortunately the only solution I have found is to still generate the frame but make it invisible, so it doesn't show anything. It still takes up space in the chart area though, so the histogram is shrunk somewhat in width. Also I was unable to find a solution to stop printing the Frequency numbers with decimals here. (You can use setTickLabelFormat to say no decimals, but then that applies to all charts by default. SPSS typically chooses the numeric format from the data, but here since the count is calculated in inline GPL, there is no way to specify it in advance that I know of.)

* Histogram, no summary frame.
* Still not happy with Frequency numbers, stat summary is there but is empty.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Y
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Y=col(source(s), name("Y"))
  GUIDE: axis(dim(1), label("Y"))
  GUIDE: axis(dim(2), label("Frequency"))
  GUIDE: text.title(label("Simple Histogram of Y"))
  ELEMENT: interval(position(summary.count(bin.rect(Y, binWidth(0.1), binStart(-0.05) ))),
                    color.interior(color."bebebe"), color.exterior(color.white), 
                    transparency.interior(transparency."0.35"), transparency.exterior(transparency."0.35"))
END GPL.

In the next few charts I illustrate how SPSS pulls the axis label formats from the data itself. So first I create a few new variables for percentages, dollars, and long values with comma groupings. Additionally, a new part of this template is that I was able to figure out how to set the text style for small multiple panels (panel labels are treated like tick marks for the X/Y axes). So this illustrates my favorite way to mark panels.

* Small multiple columns, illustrating different variable formats.
COMPUTE XPct = X*100.
COMPUTE YDollar = Y * 5000.
COMPUTE YComma = Y * 50000.
*Can also do PCT3.1, DOLLAR5.2, COMMA4.1, etc.
FORMATS XPct (PCT3) YDollar (DOLLAR4) YComma (COMMA7).
EXECUTE. 

GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=XPct YDollar Pair
  /GRAPHSPEC SOURCE=INLINE
  /FITLINE TOTAL=NO.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("XPct"))
  DATA: Y=col(source(s), name("YDollar"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2), label("Y Label"))
  GUIDE: axis(dim(3), opposite())
  ELEMENT: point(position(X*Y*Pair), size(size."12"), color.interior(color."bebebe"))
END GPL.

That is how to make panels in columns, but you can also do it in rows. And again I prevent all the text I can from turning vertical, and force it to be written horizontally.

* Small multiple rows.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X YComma Pair
  /GRAPHSPEC SOURCE=INLINE
  /FITLINE TOTAL=NO.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("YComma"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2), label("Y Label"))
  GUIDE: axis(dim(4), opposite())
  ELEMENT: point(position(X*Y*1*Pair))
END GPL.

And finally if you do wrapped panels and set the facet label to opposite, it puts the grid label on the top of the panel.

* Small multiple wrap.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y Group
  /GRAPHSPEC SOURCE=INLINE
  /FITLINE TOTAL=NO.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  DATA: Group=col(source(s), name("Group"), unit.category())
  COORD: rect(dim(1,2), wrap())
  GUIDE: axis(dim(1))
  GUIDE: axis(dim(2))
  GUIDE: axis(dim(3), opposite())
  ELEMENT: point(position(X*Y*Group))
END GPL.

Next up I show how my default colors do with line charts. I typically tell people to avoid yellow for lines, but this tan color does alright I think. (And Van Gogh was probably color blind, so it works out well for that.) I have not tested out printing in grey-scale; I don't think it will work out well beyond just the first two colors.

* Multiple line chart (long format).
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Time Y TGroup
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Time=col(source(s), name("Time"))
  DATA: Y=col(source(s), name("Y"))
  DATA: TGroup=col(source(s), name("TGroup"), unit.category())
  SCALE: linear(dim(1), min(1))
  GUIDE: axis(dim(1), delta(1))
  GUIDE: axis(dim(2))
  GUIDE: legend(aesthetic(aesthetic.color.interior))
  GUIDE: legend(aesthetic(aesthetic.size), label("Size Y"))
  GUIDE: text.subtitle(label("   Y"))
  SCALE: cat(aesthetic(aesthetic.color.interior), include("0", "1"))
  ELEMENT: line(position(Time*Y), color.interior(TGroup))
  ELEMENT: point(position(Time*Y), shape(TGroup), color.interior(TGroup),
                    color.exterior(color.white), size(size."10"))
END GPL.

A nice approach that forgoes the legend with line charts is to label the ends of the lines, and I show that below. Also the data above is in long format, and superimposing points does not quite work out perfectly (G2 should always be above G1, but sometimes the G1 point is above the G2 line). Drawing each individual line in wide format though, you can prevent that from happening (but it is more work to write the GPL – you need several ELEMENT statements for a single line). I also show how to use splines here, which can sometimes help disentangle the spaghetti lines, but user beware: the interpolated spline values can be misleading (one of the reasons I like superimposing the point markers as well).

* Multiple line chart with end labels (wide format).
AGGREGATE OUTFILE=* MODE=ADDVARIABLES OVERWRITE=YES
  /BREAK
  /LastTime = MAX(Id).
IF Id = LastTime IdLabel = Id.
FORMATS IdLabel (F2.0).
EXECUTE.

GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Id X Y IdLabel
   MISSING=VARIABLEWISE
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Id=col(source(s), name("Id"))
  DATA: IdLabel=col(source(s), name("IdLabel")) 
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  SCALE: linear(dim(1), min(1))
  SCALE: linear(dim(2), max(1.08))
  GUIDE: axis(dim(1), delta(1))
  GUIDE: axis(dim(2))
  SCALE: cat(aesthetic(aesthetic.color.interior), include("0", "1"))
  ELEMENT: line(position(smooth.spline(Id*Y)), color(color.red))
  ELEMENT: line(position(smooth.spline(Id*X)), color(color.blue))
  ELEMENT: point(position(Id*Y), color.interior(color.red), color.exterior(color.white),
                    size(size."9"), shape(shape.circle))
  ELEMENT: point(position(Id*X), color.interior(color.blue), color.exterior(color.white),
                    size(size."8"), shape(shape.square))
  ELEMENT: point(position(IdLabel*Y), color(color.red), label("Y"), 
                    transparency.interior(transparency."1"), transparency.exterior(transparency."1"))
  ELEMENT: point(position(IdLabel*X), color(color.blue), label("X"), 
                    transparency.interior(transparency."1"), transparency.exterior(transparency."1"))
END GPL.

A final example I illustrate is an error bar chart, with a few different notes about line breaks. You can use a special \n character in labels to break lines where you want. I also had a request for labeling the two ends of an axis, and here I fudge that look by adding in a bunch of white space. This takes trial-and-error to figure out the right number of spaces, and it can change if the Y axis labels change length, but it is the least bad way I can think of to do such a task. For error bars and bar graphs, it is also often easier to generate them going vertical, and just use COORD: transpose() to make them horizontal if you want.

* Error Bar chart.
VALUE LABELS Pair
  0 'Group\nOne'
  1 'Group\nTwo'
.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=Pair MEANCI(Y, 95)[name="MEAN_Y" LOW="MEAN_Y_LOW" 
    HIGH="MEAN_Y_HIGH"] MISSING=LISTWISE REPORTMISSING=NO
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  DATA: MEAN_Y=col(source(s), name("MEAN_Y"))
  DATA: LOW=col(source(s), name("MEAN_Y_LOW"))
  DATA: HIGH=col(source(s), name("MEAN_Y_HIGH"))
  COORD: transpose()
  GUIDE: axis(dim(1))
  GUIDE: axis(dim(2), label("Low Anchor                                                    High Anchor", "\nLine 2"))
  GUIDE: text.title(label("Simple Error Bar Mean of Y by Pair\n[Transposed]"))
  GUIDE: text.footnote(label("Error Bars: 95% CI"))
  SCALE: cat(dim(1), include("0", "1"), reverse() )
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(region.spread.range(Pair*(LOW+HIGH))), shape.interior(shape.ibeam))
  ELEMENT: point(position(Pair*MEAN_Y), size(size."12"))
END GPL.

Going back to scatterplots, I will illustrate a continuous legend example using the size of points in a scatterplot. For size aesthetics it typically makes sense to use a square root scale instead of a linear one (SPSS's default power scale exponent is 0.5, so a square root). If you go to edit the chart and click on the legend, you will see what I mean about the excessive padding of white space at the bottom. (I also wish you could control the legend breaks shown via inline GPL, although breaks for non-linear scales are no doubt tricky.)

* Continuous legend example.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y
  /GRAPHSPEC SOURCE=INLINE
  /FITLINE TOTAL=NO.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2))
  GUIDE: text.subtitle(label("  Y Label"))
  GUIDE: legend(aesthetic(aesthetic.size), label("SizeLab"))
  SCALE: linear(dim(2), max(1.05))
  SCALE: pow(aesthetic(aesthetic.size), aestheticMinimum(size."6px"), 
               aestheticMaximum(size."30px"), min(0), max(1))
  ELEMENT: point(position(X*Y), size(Y), color.interior(color."bebebe"))
END GPL.

And you can do multiple legends as well. SPSS does a good job of blending them together in this example, although to label the blended legend you need to figure out which aesthetic wins out in the hierarchy, I guess.

* Multiple legend example.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=X Y Pair
  /GRAPHSPEC SOURCE=INLINE
  /FITLINE TOTAL=NO.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: X=col(source(s), name("X"))
  DATA: Y=col(source(s), name("Y"))
  DATA: Pair=col(source(s), name("Pair"), unit.category())
  GUIDE: axis(dim(1), label("X Label"))
  GUIDE: axis(dim(2))
  GUIDE: text.subtitle(label("  Y Label"))
  GUIDE: legend(aesthetic(aesthetic.color.interior), label("Color/Shape Lab"))
  GUIDE: legend(aesthetic(aesthetic.size), label("Size & Trans. Lab"))
  SCALE: linear(dim(2), max(1.05))
  SCALE: linear(aesthetic(aesthetic.size), aestheticMinimum(size."6px"), 
               aestheticMaximum(size."30px"), min(0), max(1))
  SCALE: linear(aesthetic(aesthetic.transparency.interior), aestheticMinimum(transparency."0"), 
               aestheticMaximum(transparency."0.6"), min(0), max(1))
  ELEMENT: point(position(X*Y), transparency.interior(Y), 
                            shape(Pair), size(Y), color.interior(Pair))
END GPL.

But that is not too shabby straight out of the box with no post-hoc editing.

Wishes for GPL and Templates

So throughout the post I have noted things I wish I could do via chart templates. To recap: some documentation on what is possible, turning off the summary statistics frame for histograms entirely (not just making it invisible), controlling the padding for continuous legends, and making the setStyle defaults for the various chart types styleOnly are a few examples. And then there are some things I wish you could do in inline GPL, such as setting the location of the legend. I would also not mind having access to some of these template settings in inline GPL – setting the number format in a GUIDE statement, say, I think would make sense. (I have no idea how hard or easy these things are under the hood.) But having no documentation for templates is a real hassle when trying to edit your own.

Do y’all have any other suggestions for a chart template? Other default charts I should check out to see how my template does? Other suggested color scales? Let me know your thoughts!

Comparing the WDD vs the Wilson log IRR estimator

So this is maybe my final post on the WDD estimator for the time being (Wheeler & Ratcliffe, 2018). Recently David Wilson had an article in JQC that proposes a different estimator using the same basic information, just pre-post crime counts for treated and control areas (Wilson, 2021). So say we had the table:

         Pre   Post
Treated   50     30
Control   60     55

So in this scenario, my WDD estimate is -20 in the treated area and -5 in the control area, so the overall estimate is -20 - (-5) = -15.

30 - 50 - (55 - 60) = -15

So an estimated reduction of 15 crimes overall. David's estimator is the logged incident rate ratio (IRR), and it is just like the above, except it logs all of the values:

log(30) - log(50) - ( log(55) - log(60) ) = -0.4238142

This is a logged incident rate adjustment, so most of the time people exponentiate this value, which is exp(-0.4238142) = 0.6545455. So this suggests crime is reduced by approximately 35% in the treated area relative to the control area in this hypothetical. Or another way to write it is (30/50)/(55/60) = 0.6545455.
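
To make the arithmetic concrete, here is a minimal python sketch of just these two point estimates from the example table (no standard errors yet):

import numpy as np

# example table: pre/post counts for the treated and control areas
treat_pre, treat_post = 50, 30
cont_pre, cont_post = 60, 55

# WDD point estimate, a difference-in-differences of the raw counts
wdd = (treat_post - treat_pre) - (cont_post - cont_pre)
print(wdd)  # -15

# Wilson's logged incident rate ratio (IRR)
log_irr = (np.log(treat_post) - np.log(treat_pre)) - (np.log(cont_post) - np.log(cont_pre))
print(log_irr)          # about -0.4238
print(np.exp(log_irr))  # about 0.6545, i.e. roughly a 35% relative reduction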

So instead of a linear estimate of the total number of crimes reduced, this is an estimate of the overall rate reduction. This begs the question of when you would prefer my WDD vs the IRR. I will try to answer that below – in short, I think David's estimator makes sense for meta-analyses (as I have said before in reference to the work in Braga & Weisburd, 2020), but for an individual agency doing an experimental evaluation I much prefer my estimator. The skinny of this logic is that we only really care about the overall crime reduction estimate from a cost-benefit analysis perspective. Backing out this total crime reduction count estimate from David's IRR estimate can result in some funny business for an individual study.

Identifying Assumptions

So there are really two different assumptions my WDD estimator and David's IRR estimator make. To generate a standard error estimate around the point estimate, both require that the data are Poisson distributed, so that makes no difference between the two. The assumption that really distinguishes the WDD from the IRR estimate is the parallel trends assumption. The WDD assumes parallel trends on the linear scale, whereas the IRR assumes parallel trends on the ratio scale.
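
As a quick aside on that Poisson piece, here is a minimal python sketch of the standard errors for the example table, mirroring the var_est and se_logirr calculations in the simulation code pasted at the end of the post (with a single pre and post period each, the WDD variance is just the sum of the four observed counts):

import numpy as np

# counts from the example table, one pre and one post period each
tr0, tr1, co0, co1 = 50, 30, 60, 55

# WDD: Poisson variance of a sum/difference of counts is the sum of the counts
wdd = (tr1 - tr0) - (co1 - co0)
se_wdd = np.sqrt(tr0 + tr1 + co0 + co1)
print(wdd / se_wdd)  # z-score of about -1.07

# logged IRR: variance is the sum of the reciprocal counts
log_irr = (np.log(tr1) - np.log(tr0)) - (np.log(co1) - np.log(co0))
se_irr = np.sqrt(1/tr0 + 1/tr1 + 1/co0 + 1/co1)
print(log_irr / se_irr)  # z-score of about -1.43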

What exactly does this parallel trends assumption mean? Imagine we have a treated and a control area, and look at the crime trends per time period before the treatment occurred. This pair of areas has parallel trends on the linear scale:

Time Treated Control
 0     50      60
 1     40      50
 2     45      55
 3     50      60

When the treated area goes down by 10 crimes, the control area goes down by 10 crimes. That is parallel on the linear scale. Whereas this scenario is parallel on the ratio scale:

Time Treated Control
 0     50      60
 1     40      48
 2     45      54
 3     50      60

When crime goes down by 20% in the treated area, it goes down by 20% in the control area.
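
A quick way to see the difference with the two hypothetical tables above is to check whether the period-to-period changes are equal on the count scale or on the log (ratio) scale, e.g. a minimal python sketch:

import numpy as np

treated  = np.array([50, 40, 45, 50])
cont_lin = np.array([60, 50, 55, 60])  # the linear parallel trends example
cont_rat = np.array([60, 48, 54, 60])  # the ratio parallel trends example

# equal differences in counts -> parallel trends on the linear scale
print(np.diff(treated))   # [-10   5   5]
print(np.diff(cont_lin))  # [-10   5   5]

# equal differences in logs (equal ratios) -> parallel trends on the ratio scale
print(np.diff(np.log(treated)))   # [-0.223  0.118  0.105]
print(np.diff(np.log(cont_rat)))  # [-0.223  0.118  0.105]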

So while this gives a potential way to say you should use the WDD (parallel on the linear scale) or the IRR (parallel on the ratio scale), in practice it is not so simple. For one thing, if you only have the pre/post counts of crime, you cannot distinguish between these two scenarios. You can only tell them apart when you have historical data to examine.

For a second part of this, you can typically choose your own control area (see for example the synthetic control estimator). So in most scenarios you could choose a control area to obey either the linear or the ratio parallel trends assumption if you wanted to. However, in many scenarios there may be a natural/easy control area, and in that case you can check whether the linear or ratio trend is a better fit.

A final wee bit of a perverse aspect about this I will mention – pretend the treated and control areas have approximately the same baseline crime counts/rates:

Time Treated Control
 0      30     30
 1      25     25
 2      20     20
 3      25     25

You actually cannot tell in this scenario whether the parallel trends are on the linear scale for my WDD or on the ratio scale for the IRR estimate. They are consistent with either! In practice I think it will often be like this – with noisy data, if you choose a control area that has approximately the same baseline crime counts, it will be quite hard to tell whether linear or ratio parallel trends make more sense.

There are situations where linear changes do not make sense, but they tend to be scenarios such as when the control area has very little crime (so it cannot go below 0 to match larger ups/downs in the treated area). So in that case, sure, the IRR is plausible and the WDD is not, but those are cases where the control area itself is quite questionable. Also note the IRR is not defined for any cells with 0 crimes – but neither of our estimators is good in that case (although mine won't fail to spit out a number, the power is so low the number it spits out won't be worth much).

Bias/Coverage

So I have adapted the same simulation code I used in prior studies/blog posts to evaluate the null distribution and the coverage of David's IRR estimator. I partly did not pursue this initially, back when Jerry and I were discussing the idea, because I thought it would be biased. Generalized linear models are based on maximum likelihood estimators, which are only asymptotically valid. In short, it appears I was wrong here and David's IRR estimator is fine even with just four observations, at least for the handful of scenarios I have tried (I have not looked at very tiny crime counts; the estimator is undefined if any cell has 0 crimes, as you cannot take the log of 0).

The Python code is pasted at the very end of the blog post, but for example if we generate a set of null, no-change pre/post simulations with a baseline of 50 crimes, the logged IRR estimate (converted into a z-score here) is just fine and dandy, with a very close to standard normal distribution based on 10k simulations.

So let's look at the scenario where the control area doesn't change, but the treated area goes from 50 to 30. We can see again that the point estimate in this scenario is right on the money.

And then we can see the coverage of the logged irr estimator is spot on as well:

So if you are interested in slightly different baseline scenarios, you can use that same simulation code to check out the behavior of David’s estimator and conduct simulated power analysis the same way I have shown for the WDD estimator in prior blog posts.

So if both are unbiased and have good coverage, why would you prefer the WDD estimator over the IRR estimator (or vice-versa)? Well, let's take the 35% reduction I talked about at the beginning of the post, and say the department needs to spend $250k on extra officers to conduct whatever hot spot policing intervention. A 35% reduction may be worth it if we start with a baseline of 200 crimes (we would expect to go down to 130, a reduction of 70 crimes). If the baseline is 20 crimes, it only goes down to 13 crimes (a reduction of 7 crimes). The actual benefit of the IRR estimate is entirely dependent on the baseline count of crimes it is applied to.
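
To put numbers on that dependence, a tiny sketch using the IRR point estimate from the start of the post (the 70 and 7 figures above are just the rounded 35% applied to each baseline):

irr = 0.6545455  # exp(-0.4238142) from the earlier example
for baseline in (200, 20):
    reduction = baseline - baseline * irr
    print(baseline, round(reduction, 1))
# baseline of 200 -> roughly 69 fewer crimes
# baseline of  20 -> roughly  7 fewer crimes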

Even if the IRR estimate is itself unbiased and has proper coverage, backing out the estimated reduction in total crimes from the IRR is biased, even for an individual study. So here is that same simulated data (50 to 30 in the treated area, and 50 to 50 in the control area). The true count reduction is -20, and the graph shows the point estimate on the X axis and the length of the confidence interval for each simulation on the Y axis for my WDD test. You can see they are nicely centered on -20, and the length of the confidence intervals has a very tiny variance – they are mostly just a smidge over 50 in total length. That is probably tough to wrap your head around, but the variance of the variance estimates for the WDD is small.

Now lets do the same graph for the IRR estimate, but translated back out to a count crime reduction based on the simulated values:

We either have a ton of bias in this estimate (when the estimate of the count reduction is too large, the confidence interval is too small), or the opposite – the estimate of the count reduction is too small and the confidence interval is crazy wide. In Andrew Gelman's terminology, it can result in pretty large type M (magnitude) errors in this simulated example (Gelman & Carlin, 2014). So the variance of the variance estimates in this scenario is quite large.

To be clear – if you are interested in estimating a percent reduction, by all means use David's IRR estimator. If, however, you want to translate that percent reduction into an estimate of the total crimes reduced, you should use my WDD estimator. You should not back out a total crimes reduced estimate from the IRR.

Final Thoughts

So I have said a few times that I think the IRR estimator makes more sense for meta-analyses. Why do I think that? Well, imagine we have an underlying causal process through which a hot spots policing experiment can randomly deter/prevent a particular proportion of crimes. That underlying causal process suggests an IRR effect. And the problem I mention with translating back to crime counts should, I believe, get smaller with tighter estimates.

For a causal process that is more akin to my WDD estimator, imagine some crimes will always be deterred/prevented from a hot spots policing experiment, and some will never be. And we don’t know up-front which is which, so the observed reduction is based on whatever mixture of the two we have at that particular location.

The proportion reduction seems to make more sense to me for active patrol type interventions (which are ephemeral) vs permanent CPTED-like interventions, which should prevent certain criminal acts in perpetuity. But of course any situation in the real world could have both occurring at the same time.

When you go and look at the meta-analysis of hot spots policing, those interventions are all over the place (Hinkle et al., 2020). I think my WDD estimate would not make sense to mash up into a final meta-analytic estimate. The IRR may not make sense either in the end, but it is plausibly more relevant to compare the IRRs from a study with a baseline of 200 crimes vs one with 40 crimes at baseline. I am not sure it makes sense to compare WDDs in that scenario. But that being said, a few of my blog posts have discussed the WDD normalized per unit area or per unit time. Those normalized estimates are probably more apples to apples in the 200 vs 40 scenario.

A final point I have not discussed here is that David includes a correction for overdispersion, so that is a potential feather in the cap for his estimator vs the WDD. I'd be a bit hesitant with that though – only four observations to estimate the dispersion term is slicing it a bit thin IMO. But I was wrong about the original estimator, so I may be wrong about that as well. It will take simulation evidence to determine that though – David's paper just provides the correction term; he doesn't provide evidence for its utility with small sample data.

And to be fair, I have not done simulations to see how my estimator behaves in the presence of overdispersion either. I believe it will simply cause the standard errors to be too small, so like in Wheeler (2016), I imagine it will just require upping the interval (e.g. using a z-score of 3 instead of 2) to get proper coverage for real crime data.

References

Python simulation code

Here is a copy-pasted chunk of the entire python simulation code.

'''
Comparing WDD to log(IRR) from Wilson's
recent paper, https://link.springer.com/article/10.1007/s10940-021-09494-w

Andy Wheeler
'''

import pandas as pd
import numpy as np
from scipy.stats import norm
from scipy.stats import poisson
from scipy.stats import uniform
import matplotlib
import matplotlib.pyplot as plt
import os
my_dir = r'D:\Dropbox\Dropbox\Documents\BLOG\wdd_vs_irr'
os.chdir(my_dir)

#########################################################
#Settings for matplotlib

andy_theme = {'axes.grid': True,
              'grid.linestyle': '--',
              'legend.framealpha': 1,
              'legend.facecolor': 'white',
              'legend.shadow': True,
              'legend.fontsize': 14,
              'legend.title_fontsize': 16,
              'xtick.labelsize': 14,
              'ytick.labelsize': 14,
              'axes.labelsize': 16,
              'axes.titlesize': 20,
              'figure.dpi': 100}

matplotlib.rcParams.update(andy_theme)
#########################################################


#This works for the scipy functions as well
np.random.seed(seed=10)

# A function to generate the WDD and log IRR estimates for simulated data
def wdd_sim(treat0,treat1,cont0,cont1,pre,post):
    tr_cr_0 = poisson.rvs(mu = treat0, size=int(pre)).sum()
    co_cr_0 = poisson.rvs(mu = cont0, size=int(pre)).sum()
    tr_cr_1 = poisson.rvs(mu = treat1, size=int(post)).sum()
    co_cr_1 = poisson.rvs(mu = cont1, size=int(post)).sum()
    # WDD estimates
    est = ( tr_cr_1/post - tr_cr_0/pre ) - ( co_cr_1/post - co_cr_0/pre )
    post2 = (1/post)**2
    pre2 = (1/pre)**2
    var_est = tr_cr_0*pre2 + tr_cr_1*post2 + co_cr_0*pre2 + co_cr_1*post2
    true_val = ( treat1 - treat0 ) - ( cont1 - cont0 )
    z_score = est / np.sqrt(var_est)
    # Wilson log IRR estimates
    true_logirr = np.log( (treat1*cont0) / (cont1*treat0) )
    est_logirr = np.log( ((tr_cr_1/post)*(co_cr_0/pre)) / ( (co_cr_1/post)*(tr_cr_0/pre) ) )
    se_logirr = np.sqrt( 1/tr_cr_1 + 1/co_cr_0 + 1/co_cr_1 + 1/tr_cr_0 )
    z_logirr = est_logirr / se_logirr
    return (tr_cr_0, co_cr_0, tr_cr_1, co_cr_1, est, var_est, true_val, z_score, true_logirr, est_logirr, se_logirr, z_logirr)

def make_data(n, treat0, treat1, cont0, cont1, pre, post):
    base = pd.DataFrame( range(n), columns=['index'])
    base['treat0'] = treat0
    if treat1 is not None:
        base['treat1'] = treat1
    else:
        base['treat1'] = base['treat0']
    if cont0 is not None:
        base['cont0'] = cont0
    else:
        base['cont0'] = base['treat0']
    if cont1 is not None:
        base['cont1'] = cont1
    else:
        base['cont1'] = base['cont0']
    base.drop(columns='index',inplace=True)
    base['pre'] = pre
    base['post'] = post
    sim_vals = base.apply(lambda x: wdd_sim(**x), axis=1, result_type='expand')
    sim_vals.columns = ['sim_t0','sim_c0','sim_t1','sim_c1','est','var_est','true_val','z_score',
                        'true_logirr','est_logirr','se_logirr','z_logirr']
    return pd.concat([base,sim_vals], axis=1)

# Coverage of the log irr estimate
# Let's look at the coverage rate for the scenarios simulated below
def cover_logirr(data, ci=0.95):
    mult = (1 - ci)/2
    nv = norm.ppf(1 - mult)
    dif = nv*data['se_logirr']
    low = data['est_logirr'] - dif
    high = data['est_logirr'] + dif
    cover = ( data['true_logirr'] > low) & ( data['true_logirr'] < high )
    return cover

# Length of ci for WDD
def len_ci(data, ci=0.95):
    mult = (1 - ci)/2
    nv = norm.ppf(1 - mult)
    dif = nv*np.sqrt( data['var_est'] )
    low = data['est'] - dif
    high = data['est'] + dif
    return low, high, high - low

# Length of ci for IRR estimate on count scale
# This depends on the baseline estimate to multiply
# The IRR by, using the baseline average of the 
# Treatment area

def len_irr(data, ci=0.95):
    mult = (1 - ci)/2
    nv = norm.ppf(1 - mult)
    dif = nv*data['se_logirr']
    low = data['est_logirr'] - dif
    high = data['est_logirr'] + dif
    baseline = data['sim_t0']/data['pre']
    # Even if you use hypothetical, the variance is quite high
    #baseline = data['treat0']
    est_count = baseline*np.exp(data['est_logirr']) - baseline
    c1 = baseline*np.exp(low) - baseline
    c2 = baseline*np.exp(high) - baseline
    return est_count, c1, c2, np.abs(c2 - c1)

##########################
# Example with no change, lets look at the null distribution
sim_n = 10000
no_diff = make_data(sim_n, 50, 50, 50, 50, 1, 1)
no_diff['z_logirr'].describe()
##########################

##########################
# Example with equal time periods, a reduction from 50 to 30 and 50 to 50 in control area
sim_dat = make_data(sim_n, 50, 30, 50, 50, 1, 1)
sim_dat[['true_logirr','est_logirr','se_logirr']].describe()

cl = cover_logirr(sim_dat)
cl.mean()

# Compare length of CI for IRR vs WDD

# WDD length
lowdd, highwdd, lwdd = len_ci(sim_dat)
lwdd.describe()

# IRR length on the count scale
est_cnt_irr, lo_irr, hi_irr, ln_irr = len_irr(sim_dat)
ln_irr.describe()

# Scatterplot of estimated count reduction vs
# Length of CI
fig, ax = plt.subplots(figsize=(8,6))
ax.scatter(est_cnt_irr, ln_irr, c='k', 
            alpha=0.1, s=4)
ax.set_axisbelow(True)
ax.set_xlabel('Estimated Count Reduction [IRR]')
ax.set_ylabel('Length of CI on count scale [IRR]')
plt.savefig('IRR_Len_Est.png', dpi=500, bbox_inches='tight')
plt.show()

# Lets compare to the WDD estimate
fig, ax = plt.subplots(figsize=(8,6))
ax.scatter(sim_dat['est'], lwdd, c='k', 
            alpha=0.1, s=4)
ax.set_axisbelow(True)
ax.set_xlabel('Estimated Count Reduction [WDD]')
ax.set_ylabel('Length of CI on count scale [WDD]')
plt.savefig('WDD_Len_Est.png', dpi=500, bbox_inches='tight')
plt.show()
##########################