Adding Cost Functions to ROCR performance objects

In my last post, I gave an introduction to the ROCR package and how to use it for ROC analysis.

In the ROCR reference manual, it states “new performance measures can be added using a standard interface”, but I have not found that to be so. I may have missed some crucial step, but others have also tried to add new performance measures. One example I came across had “patched” the performance code to use a new measure, wss (Work Saved over Sampling). I liked some parts of what they did, but I wanted to add my own measure and allow a user to pass a new measure into a function without having to re-copy all the code.

Dice

I wanted to add an overlap measure known as the Dice coefficient, aka Dice Similarity Index (DSI) or Sorensen-Dice coefficient. Let's define TP to be the number of true positives, TN to be true negatives, FP to be false positives, and FN to be false negatives, and let RN/RP be the row negative/positive totals and CN/CP the column negative/positive totals. Taking rows as the predictions and columns as the true values (the two are interchangeable for this measure), our 2-by-2 table is:

              True 0   True 1   Total
Predicted 0     TN       FN      RN
Predicted 1     FP       TP      RP
Total           CN       CP      N

The Dice coefficient is defined in terms of these values as:
\frac{2 \times TP}{2\times TP + FP + FN} = \frac{2 \times TP}{RP + CP}
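
As a quick sanity check, here is a tiny sketch with made-up counts (not from any real data):

# made-up counts: 40 true positives, 10 false positives, 5 false negatives
tp = 40; fp = 10; fn = 5
2 * tp / (2 * tp + fp + fn)   # 0.8421053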

In every prediction object, there are slots for tp, tn, fp, and fn (see ?"prediction-class" for more). Therefore, I can simply take these slots to make my Dice coefficient. Here's how I did it:

dice <- function(prediction.obj){
    if (class(prediction.obj) != "prediction") {
        stop(paste("Wrong argument types: First argument must be of type", 
            "'prediction'"))    
    }
    x.values <- list()
    y.values <- list()
    # loop over each set of predictions/labels in the prediction object
    for (i in 1:length(prediction.obj@predictions)) {
        fp = prediction.obj@fp[[i]]
        tp = prediction.obj@tp[[i]]
        fn = prediction.obj@fn[[i]]
        tn = prediction.obj@tn[[i]]
        cutoffs = prediction.obj@cutoffs[[i]]
        # Dice coefficient at every cutoff
        meas_dice = 2 * tp / (2 * tp + fp + fn)
        x.values <- c(x.values, list(cutoffs))
        y.values <- c(y.values, list(meas_dice))
    }
    if (!(length(x.values) == 0 || length(x.values) == length(y.values))) {
        stop("Consistency error.")
    }
    return(new("performance", x.name = "cutoff", 
        y.name = "dice", 
        alpha.name = "none", 
        x.values = x.values, y.values = y.values, 
        alpha.values = list())
    )
}

Essentially, I copied the performance function from ROCR, made some adjustments, and dropped in my calculation (the object meas_dice). That's great! Now I have this handy function to use whenever I want.
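
For example, a quick sketch using the ROCR.simple data that ships with ROCR (the same data used later in this post):

library(ROCR)
data(ROCR.simple)
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)
perf.dice <- dice(pred)
plot(perf.dice)   # Dice coefficient across all cutoffs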

A more general solution

Although this solved my current problem, I thought about how to add new cost functions in a more general way.

Here is my solution:

# copied from original function
myperformance <- function (prediction.obj, measure, 
    x.measure = "cutoff", ...)
{
    envir.list <- my.define.environments(...)
    long.unit.names <- envir.list$long.unit.names
    function.names <- envir.list$function.names
    obligatory.x.axis <- envir.list$obligatory.x.axis
    optional.arguments <- envir.list$optional.arguments
    default.values <- envir.list$default.values
    if (class(prediction.obj) != "prediction"){
              stop(paste("Wrong argument types: First argument must be of type",
            "'prediction'"))
    }
    if (!exists(measure,  where = long.unit.names, inherits = FALSE)){
      stop(paste("Measure", measure, "not found"))
    }
    if (!exists(x.measure,  where = long.unit.names, inherits = FALSE)){
      stop(paste("Measure", measure, "not found"))
    }    
    if (exists(x.measure, where = obligatory.x.axis, inherits = FALSE)) {
        message <- paste("The performance measure", x.measure,
            "can only be used as 'measure', because it has",
            "the following obligatory 'x.measure':\n", get(x.measure,
                envir = obligatory.x.axis))
        stop(message)
    }
    if (exists(measure, where = obligatory.x.axis, inherits = FALSE)) {
        x.measure <- get(measure, envir = obligatory.x.axis)
    }
    if (x.measure == "cutoff" || exists(measure, where = obligatory.x.axis,
        inherits = FALSE)) {
        optional.args <- list(...)
        argnames <- c()
        if (exists(measure, where = optional.arguments, inherits = FALSE)) {
            argnames <- get(measure, envir = optional.arguments)
            default.arglist <- list()
            for (i in 1:length(argnames)) {
                default.arglist <- c(default.arglist, get(paste(measure,
                  ":", argnames[i], sep = ""), envir = default.values,
                  inherits = FALSE))
            }
            names(default.arglist) <- argnames
            for (i in 1:length(argnames)) {
                templist <- list(optional.args, default.arglist[[i]])
                names(templist) <- c("arglist", argnames[i])
                optional.args <- do.call(".farg", templist)
            }
        }
        optional.args <- ROCR:::.select.args(optional.args, argnames)
        function.name <- get(measure, envir = function.names)
        x.values <- list()
        y.values <- list()
        for (i in 1:length(prediction.obj@predictions)) {
            argumentlist <- ROCR:::.sarg(optional.args, predictions = prediction.obj@predictions[[i]],
                labels = prediction.obj@labels[[i]], cutoffs = prediction.obj@cutoffs[[i]],
                fp = prediction.obj@fp[[i]], tp = prediction.obj@tp[[i]],
                fn = prediction.obj@fn[[i]], tn = prediction.obj@tn[[i]],
                n.pos = prediction.obj@n.pos[[i]], n.neg = prediction.obj@n.neg[[i]],
                n.pos.pred = prediction.obj@n.pos.pred[[i]],
                n.neg.pred = prediction.obj@n.neg.pred[[i]])
            ans <- do.call(function.name, argumentlist)
            if (!is.null(ans[[1]]))
                x.values <- c(x.values, list(ans[[1]]))
            y.values <- c(y.values, list(ans[[2]]))
        }
        if (!(length(x.values) == 0 || length(x.values) == length(y.values))) {
            stop("Consistency error.")
        }
        return(new("performance", x.name = get(x.measure, envir = long.unit.names),
            y.name = get(measure, envir = long.unit.names), alpha.name = "none",
            x.values = x.values, y.values = y.values, alpha.values = list()))
    }
    else {
        perf.obj.1 <- myperformance(prediction.obj, measure = x.measure,
            ...)
        perf.obj.2 <- myperformance(prediction.obj, measure = measure,
            ...)
        return(ROCR:::.combine.performance.objects(perf.obj.1, perf.obj.2))
    }
}

What is all this code?

First off, myperformance is essentially the code from the performance function in ROCR (with ROCR's internal helper functions called via ROCR:::), except that the first line is:

envir.list <- my.define.environments(...)

instead of this line from ROCR::performance

envir.list <- .define.environments()

Note that my.define.environments takes arguments, whereas .define.environments does not. This is a crucial difference; this is where you put your measure's code.

New Environments

If you look at the code for .define.environments:

library(ROCR)
head(ROCR:::.define.environments)
1 function ()                                            
2 {                                                      
3     long.unit.names <- new.env()                       
4     assign("none", "None", envir = long.unit.names)    
5     assign("cutoff", "Cutoff", envir = long.unit.names)
6     assign("acc", "Accuracy", envir = long.unit.names) 

we see the call to new.env(). At the beginning of the function, it creates the long.unit.names environment, as well as several other environments. So every time ROCR::performance is called, it builds fresh environments containing the names of the measures ROCR knows about and the functions that compute them. This is important because we cannot assign new names and objects into an existing, locked namespace the way we could with an ordinary environment.
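
If environments are unfamiliar, they behave like small lookup tables keyed by name; here is a minimal sketch of the assign/get/exists pattern the code above relies on:

# a minimal sketch of how these measure-lookup environments behave
e <- new.env()
assign("dice", "Sorensen-Dice coefficient", envir = e)
exists("dice", where = e, inherits = FALSE)  # TRUE
get("dice", envir = e)                       # "Sorensen-Dice coefficient"

This is why I created my.define.environments, which does the same setup but also lets me register my own measures: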

my.define.environments <- function(funnames = NULL,   # short name of each measure
    longnames = NULL,  # human-readable name of each measure
    exprs = NULL,      # list of length-2 character vectors: x and y expressions
    optargs,           # list of optional arguments for each measure
    default.vals,      # list of default values for those optional arguments
    xaxis              # obligatory x.measure for each measure, if any
    )
{
  # get original environments
  envir.list <- ROCR:::.define.environments()
  long.unit.names = envir.list$long.unit.names
  function.names = envir.list$function.names   
  obligatory.x.axis = envir.list$obligatory.x.axis
  optional.arguments = envir.list$optional.arguments         
  default.values = envir.list$default.values

    .performance.dice <- function (predictions, labels, cutoffs, fp, 
        tp, fn, tn, n.pos, 
        n.neg, n.pos.pred, n.neg.pred) {
        list(cutoffs, 2 * tp / (2*tp + fp + fn))

    }  

    assign("dice", .performance.dice, 
        envir=function.names)

    assign("dice", "Sorensen-Dice coefficient", 
        envir=long.unit.names)    

    #######################################
    # Allow for general adding
    #######################################
    if (!is.null(funnames)){
        stopifnot(
            length(funnames) == length(longnames) &&
            length(funnames) == length(exprs)
            )
        if (!missing(optargs)){
          stopifnot(length(optargs) == length(funnames))
        }
        if (!missing(default.vals)){
          stopifnot(length(default.vals) == length(funnames))
        } 
        if (!missing(xaxis)){
          stopifnot(length(xaxis) == length(funnames))
        }       

        for (iname in seq_along(funnames)){  
          ie1 = exprs[[iname]][[1]]
          ie2 = exprs[[iname]][[2]]
          funcname = paste0("func <- function (predictions, labels, 
                            cutoffs, fp, 
                            tp, fn, tn, n.pos, 
                            n.neg, n.pos.pred, n.neg.pred) {
            list(", ie1, ", ", ie2, ") }")
          eval(parse(text=funcname))

            assign(funnames[iname], func, 
                envir=function.names)
            assign(funnames[iname], longnames[iname],
                envir=long.unit.names)

            #############
            # optional arguments
            #############
            if (!missing(optargs)){
              oargs = optargs[[iname]]
              for (ioarg in seq_along(oargs)){
                assign(oargs[[ioarg]][[1]], oargs[[ioarg]][[2]],
                  envir=optional.arguments)
              }
            }

            #############
            # Default values
            #############
            if (!missing(default.vals)){
              oargs = default.vals[[iname]]
              for (ioarg in seq_along(oargs)){
                assign(oargs[[ioarg]][[1]], oargs[[ioarg]][[2]],
                  envir=default.values)
              }
            }

            #############
            # Obligatory x-axis
            #############
            if (!missing(xaxis)){
              # register the obligatory x.measure for this new measure
              assign(funnames[iname], xaxis[iname],
                envir=obligatory.x.axis)
            }            

        }
    } # if (!is.null(funnames))

  list(
    long.unit.names = long.unit.names,
    function.names = function.names,
    obligatory.x.axis = obligatory.x.axis,
    optional.arguments = optional.arguments,
    default.values = default.values
  )
}

We see that my.define.environments creates new environments too! Yes, it essentially does the same thing, but I can add my dice function inside my.define.environments, and that measure can then be reused in other projects with the same code. Moreover, because arguments can be passed into my.define.environments, you can create a measure on the fly.

Below is an example of how you can use custom measures based on the code above.

Example

Here I will plot the Jaccard index, which is not implemented in the performance function. The Jaccard index is similar to the Dice coefficient and is defined as:

\frac{TP}{TP + FP + FN}

We can implement this cost function by supplying our measure (which must match the function name given in funnames), the human-readable name in longnames, and a list of 2-element character vectors in exprs. For measures plotted against the cutoff, the first element is "cutoffs" and the second element is the expression (to be evaluated by R) for the measure itself.

data(ROCR.simple)
pred <- prediction(ROCR.simple$predictions,ROCR.simple$labels)
perf.jaccard = myperformance(pred,
                             measure = "jaccard",
                             funnames = "jaccard",
                             longnames = "Jaccard Index",
                             exprs = list(c("cutoffs", "tp / (tp + fp + fn)")))
plot(perf.jaccard)

[Figure: Jaccard index plotted against the cutoff]

Voila! We now have a way to create any cost function that can be expressed in terms of the slots of a prediction object.

Here is the example with using Dice:

perf.dice = myperformance(pred, measure = "dice")
plot(perf.dice)

[Figure: Dice coefficient plotted against the cutoff]

As we already added .performance.dice to my.define.environments, we can simply call it as a measure.

Passing in 2 new measures:

The length of funnames must match that of longnames and exprs (and exprs must be a list), so you can pass in vectors for funnames and longnames and a list for exprs to define several measures at once. Here we pass in 2 new measures and get a performance object for them. You will likely want to pass in at most 2 measures at a time, since a performance object only holds two outputs, the x.values and y.values slots.

perf.both = myperformance(pred, x.measure = "dice", 
                             measure = "jaccard", 
                             funnames = c("dice", "jaccard"), 
                            longnames=c("Dice Index", "Jaccard Index"), 
              exprs = list(c("cutoffs", "2 * tp / (2*tp + fp + fn)"),
                           c("cutoffs", "tp / (tp + fp + fn)")))

plot(perf.both)

[Figure: Jaccard index plotted against the Dice coefficient]

If you look closely, you'll see some odd plotting in the upper right tail of the curve. The measures may not be monotonic when you extract them from the performance object, so you may want to sort by the x measure first for plotting purposes:

both = data.frame(x = perf.both@x.values[[1]], y = perf.both@y.values[[1]])
both = both[order(both$x), ]
colnames(both) = c(perf.both@x.name, perf.both@y.name)
plot(both, type = "l")

[Figure: Jaccard index against the Dice coefficient, after sorting by the x measure]

Conclusion

Overall, you can add new measures to the performance object in R using the code above. It's a shame that the package is orphaned; I like using it for many ROC functions and measure computations. Then again, I'm not volunteering to maintain it. Although the package says “new performance measures can be added using a standard interface”, I could not find a way to do so. Hopefully the code above lets you implement a new measure if you ever choose to do so. Have fun ROC'ing around the Christmas tree! Boom! – You just got punned.

A small introduction to the ROCR package

I've been doing some classification with logistic regression in brain imaging recently. I have been using the ROCR package, which is helpful at estimating performance measures and plotting these measures over a range of cutoffs.

The prediction and performance functions are the workhorses of most of the analyses in ROCR I've been doing. For those who haven't used ROCR before, the format of the prediction function is:

prediction(predictions, labels, label.ordering = NULL)

where predictions are some predicted measure (usually continuous) for the “truth”, which are the labels. In many applications, predictions are estimated probabilities (or log odds) and the labels are binary values. Both arguments can take a vector, matrix, data.frame, or list, but dim(predictions) must equal dim(labels).

In this post, I'll go through creating prediction and performance objects and extracting the results.

Prediction objects

Simple example: one set of prediction and labels

Let's show a simple example from the prediction help file, that uses a prediction and label vector (i.e. not a matrix). We see the data is some continuous prediction and binary label:

library(ROCR)
data(ROCR.simple)
head(cbind(ROCR.simple$predictions, ROCR.simple$labels), 5)
          [,1] [,2]
[1,] 0.6125478    1
[2,] 0.3642710    1
[3,] 0.4321361    0
[4,] 0.1402911    0
[5,] 0.3848959    0

Now, let's make the prediction object and show its contents:

pred <- prediction(ROCR.simple$predictions,ROCR.simple$labels)
class(pred)
[1] "prediction"
attr(,"package")
[1] "ROCR"
slotNames(pred)
 [1] "predictions" "labels"      "cutoffs"     "fp"          "tp"         
 [6] "tn"          "fn"          "n.pos"       "n.neg"       "n.pos.pred" 
[11] "n.neg.pred" 

We see that the returned result of prediction is an object of class prediction, which is an S4 object with a series of slots. Let's look at the length and class of each slot:

sn = slotNames(pred)
sapply(sn, function(x) length(slot(pred, x)))
predictions      labels     cutoffs          fp          tp          tn 
          1           1           1           1           1           1 
         fn       n.pos       n.neg  n.pos.pred  n.neg.pred 
          1           1           1           1           1 
sapply(sn, function(x) class(slot(pred, x)))
predictions      labels     cutoffs          fp          tp          tn 
     "list"      "list"      "list"      "list"      "list"      "list" 
         fn       n.pos       n.neg  n.pos.pred  n.neg.pred 
     "list"      "list"      "list"      "list"      "list" 

We see that each slot has length 1 and is a list.

Example: multiple sets of prediction and labels

Let's use the ROCR.hiv dataset to show how this works if more than one set of predictions and labels are supplied. Here we pass a list of \(10\) predictions and a list of labels to the prediction function:

data(ROCR.hiv)
manypred = prediction(ROCR.hiv$hiv.nn$predictions, ROCR.hiv$hiv.nn$labels)
sapply(sn, function(x) length(slot(manypred, x)))
predictions      labels     cutoffs          fp          tp          tn 
         10          10          10          10          10          10 
         fn       n.pos       n.neg  n.pos.pred  n.neg.pred 
         10          10          10          10          10 
sapply(sn, function(x) class(slot(manypred, x)))
predictions      labels     cutoffs          fp          tp          tn 
     "list"      "list"      "list"      "list"      "list"      "list" 
         fn       n.pos       n.neg  n.pos.pred  n.neg.pred 
     "list"      "list"      "list"      "list"      "list" 

We see that all the slots are still lists, but now they have length \(10\), corresponding to the \(10\) predictions/labels. We would get the same result if the 2 arguments were matrices, but that would require all predictions and labels to have the same length. Using a list of predictions/labels is a bit more flexible.
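
For instance, here is a small sketch with simulated data using the matrix interface; each column is treated as one run of predictions/labels:

# simulated example: a 200 x 3 matrix of predictions with matching labels
set.seed(1)
pred.mat = matrix(runif(600), ncol = 3)
lab.mat  = matrix(rbinom(600, size = 1, prob = 0.5), ncol = 3)
matpred  = prediction(pred.mat, lab.mat)
length(matpred@predictions)   # 3, one element per column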

Performance objects

From the help file of performance, the syntax for this function is:

performance(prediction.obj, measure, x.measure="cutoff", ...)

We see that the first argument is a prediction object, and the second is a measure. If you run ?performance, you can see all the performance measures implemented.

We will do examples of some commonly estimated measures: receiver operating characteristic (ROC) curves, accuracy, area under the curve (AUC), and partial AUC (pAUC).

ROC Curve

Simple example: one set of prediction and labels

We will do an ROC curve, which plots the false positive rate (FPR) on the x-axis and the true positive rate (TPR) on the y-axis:

roc.perf = performance(pred, measure = "tpr", x.measure = "fpr")
plot(roc.perf)
abline(a=0, b= 1)

[Figure: ROC curve for the simple predictions, with a 45-degree reference line]

At every cutoff, the TPR and FPR are calculated and plotted. The smoother the graph, the more cutoffs the predictions have. We also plotted a 45-degree line, which represents, on average, the performance of a Uniform(0, 1) random variable. The further the curve is from that diagonal, the better. Overall, we see gains in sensitivity (true positive rate, > 80%), trading off a false positive rate (1 - specificity), up until about 15% FPR. After an FPR of 15%, we don't see significant gains in TPR for the tradeoff of increased FPR.

Example: multiple sets of prediction and labels

The same can be done if you have many predictions and labels:

many.roc.perf = performance(manypred, measure = "tpr", x.measure = "fpr")
plot(many.roc.perf, col=1:10)
abline(a=0, b= 1)

[Figure: ROC curves for the 10 sets of predictions, with a 45-degree reference line]

Essentially, the plot function on a performance object with multiple predictions and labels will loop over the lists and plot the ROC for each one.

Overall, we see the performance of each prediction is similar. The pROC package, described in the conclusion, can test the performance between ROC curves.
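
For instance, here is a rough sketch, assuming the pROC package is installed (pROC's roc() takes the response first and the predictor second), comparing the first two sets of predictions:

library(pROC)
roc1 <- roc(ROCR.hiv$hiv.nn$labels[[1]], ROCR.hiv$hiv.nn$predictions[[1]])
roc2 <- roc(ROCR.hiv$hiv.nn$labels[[2]], ROCR.hiv$hiv.nn$predictions[[2]])
roc.test(roc1, roc2)   # tests whether the two AUCs differ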

Advanced: If you want to see how performance objects are plotted, use getMethod("plot", signature = c(x="performance",y="missing")) and ROCR:::.plot.performance.

Limiting to a FPR: partial ROC curve

You may only want to accept a false positive rate of a certain level, let's say 10%. The function pROC below will only keep values less than or equal to the FPR you set.

pROC = function(pred, fpr.stop){
    perf <- performance(pred,"tpr","fpr")
    for (iperf in seq_along(perf@x.values)){
        ind = which(perf@x.values[[iperf]] <= fpr.stop)
        perf@y.values[[iperf]] = perf@y.values[[iperf]][ind]
        perf@x.values[[iperf]] = perf@x.values[[iperf]][ind]
    }
    return(perf)
}

Let's use this on the simple cases and plot the partial ROC curve:

proc.perf = pROC(pred, fpr.stop=0.1)
plot(proc.perf)
abline(a=0, b= 1)

[Figure: partial ROC curve, limited to FPR <= 10%]

Thus, if we can only accept a FPR of 10%, the model is only giving 50% sensitivity (TPR) at 10% FPR (1-specificity).

Getting an “optimal” cut point

In some applications of ROC curves, you want the point closest to a TPR of \(1\) and an FPR of \(0\). This cut point is “optimal” in the sense that it weighs both sensitivity and specificity equally. To determine this cutoff, you can use the code below. The code takes BOTH the performance object and the prediction object and gives the optimal cutoff value for your predictions:

opt.cut = function(perf, pred){
    cut.ind = mapply(FUN=function(x, y, p){
        d = (x - 0)^2 + (y-1)^2
        ind = which(d == min(d))
        c(sensitivity = y[[ind]], specificity = 1-x[[ind]], 
            cutoff = p[[ind]])
    }, perf@x.values, perf@y.values, pred@cutoffs)
}
print(opt.cut(roc.perf, pred))
                 [,1]
sensitivity 0.8494624
specificity 0.8504673
cutoff      0.5014893

Now, there is a cost measure in the ROCR package that you can use to create a performance object. If you use it to find the minimum cost, then it will give you the same cutoff as opt.cut, but not give you the sensitivity and specificity.

cost.perf = performance(pred, "cost")
pred@cutoffs[[1]][which.min(cost.perf@y.values[[1]])]
[1] 0.5014893

Different costs for FP and FN

The output from opt.cut and a performance object with measure cost are NOT equivalent if false positives and false negatives are not weighted equally. The cost.fn and cost.fp arguments can be passed to performance, corresponding to the cost of a false negative and false positive, respectively. Let's say false positives are twice as costly as false negatives, and let's get a cut point:

cost.perf = performance(pred, "cost", cost.fp = 2, cost.fn = 1)
pred@cutoffs[[1]][which.min(cost.perf@y.values[[1]])]
[1] 0.5294022

Thus, we have a different “optimal” cut point with this changed cost function. In many real-life applications of biomarkers, the cost of a false positive and false negative are not the same. For example, missing someone with a disease based on a test may cost a hospital $1,000,000 in lawsuits, but treating someone who did not have the disease may cost $100,000 in treatments. In that case, the cost of a false negative is 10 times that of a false positive, strictly in monetary measures. No cost analysis is this simple and is usually based on many factors, but most analyses do not have equal cost for a false positive versus a false negative.
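
For example, a quick sketch of the 10-to-1 monetary scenario above (the exact cutoff you get will depend on your data):

cost.perf2 = performance(pred, "cost", cost.fp = 1, cost.fn = 10)
pred@cutoffs[[1]][which.min(cost.perf2@y.values[[1]])]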

The code is the same for the optimal cutoff for the multiple prediction data:

print(opt.cut(many.roc.perf, manypred))
                  [,1]       [,2]       [,3]       [,4]       [,5]
sensitivity  0.8076923  0.8205128  0.7692308  0.8205128  0.7564103
specificity  0.7902622  0.7827715  0.8501873  0.8164794  0.8464419
cutoff      -0.5749773 -0.5640632 -0.4311301 -0.5336958 -0.4863360
                  [,6]       [,7]       [,8]       [,9]      [,10]
sensitivity  0.7820513  0.7948718  0.7820513  0.7435897  0.7435897
specificity  0.8089888  0.8314607  0.8089888  0.8352060  0.8501873
cutoff      -0.5364402 -0.4816705 -0.5388664 -0.4777073 -0.4714354

Accuracy

Simple example: one set of prediction and labels

Another cost measure that is popular is overall accuracy. This measure optimizes the correct results, but may be skewed if there are many more negatives than positives, or vice versa. Let's get the overall accuracy for the simple predictions and plot it:

acc.perf = performance(pred, measure = "acc")
plot(acc.perf)

[Figure: accuracy as a function of the cutoff]

What if we actually want to extract the maximum accuracy and the cutoff corresponding to that? In the performance object, we have the slot x.values, which corresponds to the cutoff in this case, and y.values, which corresponds to the accuracy of each cutoff. We'll grab the index for maximum accuracy and then grab the corresponding cutoff:

ind = which.max( slot(acc.perf, "y.values")[[1]] )
acc = slot(acc.perf, "y.values")[[1]][ind]
cutoff = slot(acc.perf, "x.values")[[1]][ind]
print(c(accuracy= acc, cutoff = cutoff))
 accuracy    cutoff 
0.8500000 0.5014893 

Hooray! Then you can go forth and threshold your model using the cutoff for (in hopes) maximum accuracy in your test data.

Example: multiple sets of prediction and labels

Again, we will do the same with many predictions and labels, but must iterate over the results (using a mapply statement):

many.acc.perf = performance(manypred, measure = "acc")
sapply(manypred@labels, function(x) mean(x == 1))
 [1] 0.226087 0.226087 0.226087 0.226087 0.226087 0.226087 0.226087
 [8] 0.226087 0.226087 0.226087
mapply(function(x, y){
  ind = which.max( y )
  acc = y[ind]
  cutoff = x[ind]
  return(c(accuracy= acc, cutoff = cutoff))
}, slot(many.acc.perf, "x.values"), slot(many.acc.perf, "y.values"))
               [,1]         [,2]      [,3]       [,4]      [,5]       [,6]
accuracy 0.86376812  0.881159420 0.8666667  0.8724638 0.8724638  0.8753623
cutoff   0.02461465 -0.006091327 0.2303707 -0.1758013 0.1251976 -0.2153779
               [,7]      [,8]      [,9]      [,10]
accuracy  0.8753623 0.8724638 0.8637681 0.86376812
cutoff   -0.2066697 0.1506282 0.2880392 0.06536471

We see that these cutoffs are not the same as those from opt.cut above. This is due to the fact that the proportion of positive cases is much less than 50%.

Area under the curve (AUC) and partial AUC (pAUC)

Simple example: one set of prediction and labels

The area under the curve summarizes the ROC curve by taking the area between the curve and the x-axis. Let's get the area under the curve for the simple predictions:

auc.perf = performance(pred, measure = "auc")
auc.perf@y.values
[[1]]
[1] 0.8341875

As you can see, the result is a scalar number, the area under the curve (AUC). This number ranges from \(0\) to \(1\), with \(1\) indicating perfect classification (100% sensitivity and 100% specificity at some cutoff).

As before, if you only want to accept a fixed FPR, we can calculate a partial AUC, using the fpr.stop argument:

pauc.perf = performance(pred, measure = "auc", fpr.stop=0.1)
pauc.perf@y.values
[[1]]
[1] 0.02780625

Now, we see the pAUC to be much lower. It is of note that this value can range from \(0\) to whatever fpr.stop is. In order to standardize it to \(1\), you can divide it by fpr.stop to give a \([0, 1]\) measure:

pauc.perf@y.values = lapply(pauc.perf@y.values, function(x) x / 0.1)
pauc.perf@y.values
[[1]]
[1] 0.2780625

Although this measure is more comparable to the full AUC measure, it is still low. Note, there is no “one” cutoff for AUC or pAUC, as it measures the performance over all cutoffs. Also, plotting functions for scalar outcome measures (such as AUC) do not work for performance objects. The code for the multiple predictions is the same.

manypauc.perf = performance(manypred, measure = "auc", fpr.stop=0.1)
manypauc.perf@y.values = lapply(manypauc.perf@y.values, function(x) x / 0.1)
manypauc.perf@y.values
[[1]]
[1] 0.500048

[[2]]
[1] 0.5692404

[[3]]
[1] 0.5182944

[[4]]
[1] 0.5622299

[[5]]
[1] 0.5379814

[[6]]
[1] 0.5408624

[[7]]
[1] 0.5509939

[[8]]
[1] 0.5334678

[[9]]
[1] 0.4979353

[[10]]
[1] 0.4870354

Note, use sapply instead of lapply if you want the result to be a vector.
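
For example, this returns the 10 standardized pAUC values as a numeric vector rather than a list:

sapply(performance(manypred, measure = "auc", fpr.stop = 0.1)@y.values,
       function(x) x / 0.1)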

Conclusion

For ROC analysis, the ROCR package has good methods and many built-in measures. Other packages, such as the pROC package, can be useful for many functions and analyses, especially testing the difference between ROC curves. In some ways, you may want to use pROC over ROCR, especially because (when I checked Dec 18, 2014) the ROCR package was orphaned. But if you are working in ROCR, I hope this gives you some examples of how to fit the objects and extract the results.

My Commonly Done ggplot2 graphs: Part 2

In my last post I described some of my commonly done ggplot2 graphs. It seems as though some people are interested in these, so I was going to follow this up with other plots I make frequently.

Scatterplot colored by continuous variable

The setup of the data for the scatterplots will be the same as the previous post, one x variable and one y variable.

library(ggplot2)
set.seed(20141106)
data = data.frame(x = rnorm(1000, mean=6), 
                  batch = factor(rbinom(1000, size=4, prob = 0.5)))
data$group1 = 1-rbeta(1000, 10, 2)
mat = model.matrix(~ batch, data=data)
mat = mat[, !colnames(mat) %in% "(Intercept)"]
betas = rbinom(ncol(mat), size=20, prob = 0.5)
data$quality = rowSums(t(t(mat) * sample(-2:2)))
data$dec.quality = cut(data$quality, 
                       breaks = unique(quantile(data$quality, 
                                         probs = seq(0, 1, by=0.1))),
                       include.lowest = TRUE)

batch.effect = t(t(mat) * betas)
batch.effect = rowSums(batch.effect)
data$y = data$x * 5 + rnorm(1000) + batch.effect  + 
  data$quality * rnorm(1000, sd = 2)

data$group2 = runif(1000)

I have added 2 important new variables, quality and batch. The motivation for these variables is akin to an RNA-seq analysis, where you have a quality measure like read depth and the data were processed in different batches. The y variable depends on both the batch effect and the quality.

We construct the ggplot2 object for plotting x against y as follows:

g = ggplot(data, aes(x = x, y=y)) + geom_point()
print(g)

[Figure: scatterplot of y against x]

Coloring by a 3rd Variable (Discrete)

Let's plot the x and y data by the different batches:

print({ g + aes(colour=batch)})

[Figure: scatterplot of y against x, colored by batch]

We can see from this example how to color by a third, discrete variable. In this example, we see that the relationship differs by batch (each batch is shifted).

Coloring by a 3rd Variable (Continuous)

Let's color by quality, which is continuous:

print({ gcol = g + aes(colour=quality)})

[Figure: scatterplot colored by the continuous quality variable]

The default option uses a very dark blue for low values and a lighter blue for high values. I don't always want this option, as I cannot always see differences clearly. Let's change the gradient of low to high values using scale_colour_gradient:

print({ gcol + scale_colour_gradient(low = "red", high = "blue") })

[Figure: scatterplot with a red-to-blue gradient for quality]

This isn't much better. Let's call the middle quality gray and see if we can see better separation:

print({ gcol_grad = gcol + 
          scale_colour_gradient2(low = "red", mid = "gray", high = "blue") })

[Figure: scatterplot with a red-gray-blue diverging gradient for quality]

Scatterplot with Coloring by a 3rd Variable (Continuous broken into Discrete)

Another option is to break the quality into deciles (before plotting) and then coloring by these as a discrete variable:

print({ gcol_dec = g + aes(colour=dec.quality) })

[Figure: scatterplot colored by deciles of quality]

Scatterplot with Coloring by 3rd Continuous Variable Faceted by a 4th Discrete Variable

We can combine these to show each batch in different facets and coloring by quality:

print({ gcol_grad + facet_wrap(~ batch )})

[Figure: scatterplots colored by quality, faceted by batch]

We can compound all these operations by passing transformations to scale_colour_gradient, such as scale_colour_gradient(trans = "sqrt").
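
For instance, a sketch of the same diverging gradient on a square-root scale (note that a square-root transform assumes the colour variable is non-negative, which may not hold for quality as simulated above):

print({ gcol + scale_colour_gradient2(low = "red", mid = "gray", high = "blue",
                                      trans = "sqrt") })

But enough with scatterplots.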

Distributions! Lots of them.

One of the gaping holes in my last post was that I did not do any plots of distributions/densities of data. I ran the same code from the last post to get the longitudinal data set named dat.
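
I won't repeat that code here. As a stand-in, here is a purely hypothetical simulation (not the original code from the previous post) that builds a data set with the same structure, repeated measurements yij for each person (id) nested within 3 groups, so the snippets below have something to run on:

# hypothetical stand-in for the longitudinal data from the last post
set.seed(20141106)
n.id = 200                          # number of people
n.obs = rpois(n.id, lambda = 15)    # number of observations per person
dat = data.frame(id = rep(seq(n.id), times = n.obs))
dat$group = factor((dat$id %% 3) + 1)   # 3 groups
dat$yij = rnorm(nrow(dat), mean = 5 * as.numeric(dat$group), sd = 3)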

Histograms

Let's say I want a distribution of my yij variable for each person across times:

library(plyr)
g = ggplot(data=dat, aes(x=yij, fill=factor(id))) +   guides(fill=FALSE)
ghist = g+ geom_histogram(binwidth = 3) 
print(ghist)

[Figure: stacked histograms of yij, filled by person]

Hmm, that's not too informative. By default, the histograms stack on top of each other. We can change this by setting position to be identity:

ghist = g + geom_histogram(binwidth = 3, position = "identity") 
print(ghist)

[Figure: overlapping histograms of yij, filled by person]

There are still too many histograms. Let's plot a subset.

Aside: Using the %+% operator

The %+% operator allows you to reset what dataset is in the ggplot2 object. The data must have the same components (e.g. variable names); I think this is most useful for plotting subsets of data.

nobs = 10
npick = 5

Let's pick \(5\) people with \(10\) or more observations, increase the binwidth to 5, and re-plot the histograms for just that subset:

tab = table(dat$id)
ids = names(tab)[tab >= nobs]
ids = sample(ids, npick)
sub = dat[ dat$id %in% ids, ]
ghist = g + geom_histogram(binwidth = 5, position = "identity") 
ghist %+% sub

[Figure: histograms for the subset of 5 people]

Overlaid Histograms with Alpha Blending

Let's alpha blend these histograms to see the differences:

ggroup = ggplot(data=sub, aes(x=yij, fill=factor(id))) + guides(fill=FALSE)
grouphist = ggroup + geom_histogram(binwidth = 5, position = "identity", alpha = 0.33) 
grouphist

[Figure: alpha-blended histograms for the subset of 5 people]

Similarly, we can plot over the 3 groups in our data:

ggroup = ggplot(data=dat, aes(x=yij, fill=factor(group))) + guides(fill=FALSE)
grouphist = ggroup + geom_histogram(binwidth = 5, position = "identity", alpha = 0.33) 
grouphist

[Figure: alpha-blended histograms of yij by group]

These histograms are something I commonly make when I want to overlay the data in some way. More commonly though, especially with MANY distributions, I plot densities.

Densities

We can again plot the distribution of \(y_{ij}\) for each person by using kernel density estimates, filled a different color for each person:

g = ggplot(data=dat, aes(x=yij, fill=factor(id))) +   guides(fill=FALSE)
print({gdens = g+ geom_density() })

[Figure: filled density estimates of yij for each person]

As the filling overlaps a lot and blocks out other densities, we can use just different colors per person/id/group:

g = ggplot(data=dat, aes(x=yij, colour=factor(id))) +   guides(colour=FALSE)
print({gdens = g+ geom_density() })

[Figure: density estimates of yij, one color per person]

I'm not a fan of the default for stat_density, which uses geom = "area". This closes the shape with a line along the x-axis. That matters if you want to fill the density with different colors, but most of the time I simply want the density line with no bottom edge. We can achieve this with:

print({gdens2 = g + geom_line(stat = "density")})

[Figure: density lines of yij with no bottom edge]

What if we set the option to fill the lines now? Well lines don't take fill, so it will not colour each line differently.

gdens3 = ggplot(data=dat, aes(x=yij, fill=factor(id))) + geom_line(stat = "density") +  guides(colour=FALSE)
print({gdens3})

[Figure: density lines of yij when fill is mapped instead of colour]

Now, regardless of the coloring, you can't really see the difference in people since there are so many. What if we want to do the plot with a subset of the data and the object is already constructed? Again, use the %+% operator.

Overlaid Densities with Alpha Blending

Let's take just different subsets of groups, not people, and plot the densities, with alpha blending:

print({ group_dens = ggroup+ geom_density(alpha = 0.3) })

[Figure: alpha-blended density estimates by group]

That looks much better than the histogram example for groups. Now let's show these with lines:

print({group_dens2 = ggroup + geom_line(stat = "density")})

[Figure: density lines by group, before mapping colour]

What happened? Again, lines don't take fill, they take colour:

print({group_dens2 = ggroup + geom_line(aes(colour=group), stat = "density")})

[Figure: density lines of yij, colored by group]

I'm a firm believer that legends belong IN plots, so let's push the legend in there and make it blend in:

print({
  group_dens3 =
  group_dens2 +  theme(legend.position = c(.75, .75),
        legend.background = element_rect(fill="transparent"),
        legend.key = element_rect(fill="transparent", 
                                  color="transparent"))
})

[Figure: group densities with the legend placed inside the plot]

Lastly, I'll compute the mean of yij for each group and add vertical lines at each group mean:

gmeans = ddply(dat, .(group), summarise,
              mean = mean(yij))
group_dens3 + geom_vline(data=gmeans, 
                         aes(colour = group, xintercept = mean))

[Figure: group densities with vertical lines at the group means]

Conclusion

Overall, this post should give you a few ways to play around with densities and such for plotting. All the same changes as in the previous scatterplot examples, such as faceting, can be used with these distribution plots. Many times, you want to look at the data in very different ways. Histograms can let you see outliers in ways that densities do not, because densities smooth over the data. Either way, the mixture of alpha blending, coloring, and filling (though filling is less useful for many distributions) can give you a nice description of what's going on at a distributional level in your data.

PS: Boxplots

You can also do boxplots for each group, but these tend to separate well and color relatively well using defaults, so I will not discuss them at length here. My only note is that you can (and should) overlay the raw points on the boxplot rather than showing the boxplot alone. You may need to jitter the points, alpha blend them, subsample the number of points, or clean it up a bit, but I try to display more DATA whenever possible.
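
As a rough sketch (the jitter width and alpha are arbitrary choices; the default outliers are hidden so the same points are not plotted twice):

ggplot(data = dat, aes(x = factor(group), y = yij)) +
  geom_boxplot(outlier.shape = NA) +      # hide default outlier points
  geom_jitter(width = 0.2, alpha = 0.2)   # overlay the raw data, jittered and blended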