This week we will focus on hierarchical models as they are used in genomics, and particularly hierarchical models for estimates of variance. A good review of this topic is:

Analyzing ’omics data using hierarchical models by Hongkai Ji and X Shirley Liu

Hierarchical models have proven very useful in genomics, where we often have precious few biological replicates with which to assess within-group variance. The design of the experiments themselves, as well as the technical costs of sequencing DNA, often mean that only 3-5 samples are generated for each biological condition, although there may be many (e.g. dozens) of conditions. Ideally, more samples would be generated to definitively estimate the differences between conditions relative to the biological variability within each condition. How many are needed depends greatly on what the treatments are doing and on the relationship of the replicates to each other: are they mice with the same genetic background in a controlled setting, or human donors in a clinic, etc.?

The point of the hierarchical model is summarized nicely in Figure 1 of the paper linked above. The key idea is that, because we only observe a few samples, we can have very bad estimates of the variance for some genes, either because the samples we observed happened to be close to each other (under-estimation) or unusually spread apart (over-estimation), and so do not represent the biological variability we would see if we had generated more replicates. A hierarchical model, combined with an empirical Bayes procedure that adjusts the smallest and largest variance estimates toward the middle of the distribution of sample variances, helps to avoid large mistakes in inference when the estimated variances are used to build test statistics. This is sometimes referred to as moderation of the estimates, and the estimates themselves are sometimes referred to as shrinkage estimators.
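To make this concrete, the moderated estimate used by limma (introduced below) is a weighted average of a prior value, fit to the whole distribution of sample variances, and the observed sample variance for each gene, with weights given by the prior and residual degrees of freedom. A minimal numerical sketch of this weighted average, using made-up numbers:

d0 <- 4      # hypothetical prior degrees of freedom, estimated from all genes
s2.0 <- 0.25 # hypothetical prior value: the center of the variance distribution
d <- 9       # residual degrees of freedom for one gene (10 samples, 1 parameter)
s2 <- 0.05   # an unusually small sample variance observed for that gene
(d0 * s2.0 + d * s2) / (d0 + d) # about 0.11: pulled up from 0.05 toward 0.25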

One of the most popular methods making use of a hierarchical model for variance estimation is limma, which was originally designed for microarray data (continuous values) but has since been extended to work with sequencing data (counts). In this document, I will show the practical effect of running the eBayes (empirical Bayes) function in limma, and how it modifies the variance estimates. Other methods for count data, such as DESeq2 and edgeR, likewise adopt a shrinkage-based approach similar to the one shown here, but applied to the dispersion parameter of a count distribution.
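As a rough sketch of the count-data analogue (not run in this document), and assuming a hypothetical count matrix cts and condition factor cond, the DESeq2 workflow estimates gene-wise dispersions, fits a trend over the mean counts, and shrinks the gene-wise values toward that trend:

library(DESeq2)
# 'cts' and 'cond' are hypothetical: a matrix of counts and a factor of conditions
dds <- DESeqDataSetFromMatrix(countData = cts,
                              colData = data.frame(cond = cond),
                              design = ~ cond)
dds <- DESeq(dds)       # includes dispersion estimation and shrinkage
plotDispEsts(dds)       # gene-wise, trend, and final (shrunken) dispersions
head(dispersions(dds))  # the final dispersion estimates used for testing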

I start by loading the curatedBladderData package, which has a number of ExpressionSet objects with gene expression data from patients with bladder cancer. We load one of the datasets and examine the experiment data.

library(curatedBladderData)
library(affy)
data(GSE13507_eset)
e <- GSE13507_eset
experimentData(e)@name
## [1] "Kim WJ, Kim EJ, Kim SK, Kim YJ et al. Predictive value of progression-related gene classifier in primary non-muscle invasive bladder cancer. Mol Cancer 2010 Jan 8;9:3."

We will subset to a set of samples with the same staging. Our goal will be to estimate the variance of each gene using a subset of samples that is typical of a small experiment (n=5 vs 5), and compare to the variance estimate we get using all of the samples.

table(e$summarystage)
## 
##    invasive superficial 
##          62         103
e <- e[,which(e$summarystage == "superficial")]
dim(e)
## Features  Samples 
##    19329      103
boxplot(exprs(e), range=0, main="samples")

boxplot(t(exprs(e)[1:10,]), range=0, main="genes")

A plot of the standard deviation (square root of the variance) over the mean for each gene. For this demonstration, we will remove the genes with low mean, where the variances approach 0.

library(matrixStats)
rm <- rowMeans(exprs(e))
rv <- rowVars(exprs(e))
plot(rm, sqrt(rv), cex=.5, col=rgb(0,0,0,.4))

After removing the low-mean genes, a histogram of the standard deviations for each gene:

e <- e[rm > 8,]
rv <- rowVars(exprs(e))
hist(sqrt(rv),breaks=50,col="grey")

Before moving on, we perform a quick check, clustering the samples on the 1,000 most variable genes, to see that there are not large clusters in the data.

hc <- hclust(dist(t(exprs(e)[order(rv,decreasing=TRUE)[1:1000],]))) 
plot(hc)

There seems to be a single outlier, which we remove.

e <- e[,-which(colnames(e)=="GSM340606")]

Let’s recalculate the sample variance of each row, for use later:

rv <- rowVars(exprs(e))

We take a random sub-sample of just 10 arrays to compare with the full sample set. Plotting the sample standard deviation from the subset against that from the full sample set shows some correspondence, but many genes fall above and below the line.

n <- 10
set.seed(1)
sample.idx <- sample(ncol(e), n)
e.sub <- e[,sample.idx]
rv.sub <- rowVars(exprs(e.sub))
plot(sqrt(rv), sqrt(rv.sub), cex=.5, col=rgb(0,0,0,.5),
     xlab="sample SD full data", ylab="sample SD subset")
abline(0,1)

Empirical Bayes shrinkage estimator

The limma package has a function eBayes which takes a fitted model object containing the gene-wise sample variances, estimates the parameters of a prior distribution by fitting it to the observed variances, and then produces posterior estimates of the variance for each gene. Plotting the posterior estimates against the ordinary sample variances, you can see they have shifted toward a central value, which is s2.prior.

library(limma)
design <- model.matrix(~1, data.frame(row.names=1:n))
fit <- lmFit(exprs(e.sub), design)
fit <- eBayes(fit)
plot(rv.sub, fit$s2.post, xlim=c(0,1), ylim=c(0,1),
     xlab="sample var", ylab="eBayes var")
abline(0,1)
abline(v=fit$s2.prior, h=fit$s2.prior)
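Because the estimated prior degrees of freedom here are finite, the posterior values should equal the weighted average described earlier, with the gene-wise sample variances stored by lmFit in fit$sigma^2 (which, for this intercept-only design, match rv.sub). A quick check:

# s2.post should be the weighted average of s2.prior and the gene-wise
# sample variances, weighted by the prior and residual degrees of freedom
s2.manual <- (fit$df.prior * fit$s2.prior + fit$df.residual * fit$sigma^2) /
  (fit$df.prior + fit$df.residual)
all.equal(unname(s2.manual), unname(fit$s2.post))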

Another way to visualize the posterior estimates is to plot the original estimates on the left side, and the new estimates on the right side, with segments connecting them. The blue line indicates the middle of the prior distribution. Compare the following plot with the second figure in this paper by Bradley Efron and Carl Morris on shrinkage estimators.

library(rafalib)
nullplot(0,1,0,2,xaxt="n")
axis(1, c(0,1), c("before","after"))
n <- 100
idx <- sample(nrow(e),n)
segments(rep(0,n), rv.sub[idx], rep(1,n), fit$s2.post[idx], col=rgb(0,0,0,.4))
abline(h=fit$s2.prior, col="dodgerblue", lwd=5)

We repeat the above plot, this time for the genes with the highest estimates of sample variance, to see that these are "shrunk" substantially toward the middle.

nullplot(0,1,0,12,xaxt="n")
axis(1, c(0,1), c("before","after"))
n <- 100
idx <- order(rv.sub, decreasing=TRUE)[1:n]
segments(rep(0,n), rv.sub[idx], rep(1,n), fit$s2.post[idx], col=rgb(0,0,0,.4))
abline(h=fit$s2.prior, col="dodgerblue", lwd=5)

We can show that, compared to the "true" values (the sample standard deviations computed using all of the samples), our new estimators reduce both the root mean squared error and the median absolute error. We also compare to using a single common value for all genes (the location of the prior).

true <- sqrt(rv)
samp.sd <- sqrt(rv.sub)
ebayes.sd <- sqrt(fit$s2.post)
sqrt(mean((samp.sd - true)^2)) #   sample SD - RMSE
## [1] 0.1832464
sqrt(mean((ebayes.sd - true)^2)) # EBayes SD - RMSE
## [1] 0.1456775
sqrt(mean((fit$s2.prior - true)^2)) #  prior - RMSE
## [1] 0.3710238
median(abs(samp.sd - true)) #   sample SD - MAD
## [1] 0.1109148
median(abs(ebayes.sd - true)) # EBayes SD - MAD
## [1] 0.09683256
median(abs(fit$s2.prior - true)) #  prior - MAD
## [1] 0.204267

It helps to zoom into individual genes to see how we have reduced the large errors, comparing our estimates from the small sub-sample to those from the full dataset. Two things to note: (i) when the estimates are low, they tend to be too low, and when the estimates are high, they tend to be too high; this phenomenon is sometimes referred to as the "winner's curse". (ii) most of the posterior SD estimates are close to 0.5, so we have reduced the number of genes on the left side of the plot, where the SD estimate is very low and is an underestimate. These genes would lead to false positives by inflating the t statistic. Similarly, using the posterior estimates, there are fewer genes with large overestimates of the variance.

par(mfrow=c(1,2),mar=c(4,2,2,1))
plot(samp.sd, samp.sd - true, col=rgb(0,0,0,.4), xlim=c(0,4), ylim=c(-1.5,1.5),
     main="'error' over estimate")
abline(h=-1:1,v=.5,lty=2,col=rgb(1,0,0,.5))
plot(ebayes.sd, ebayes.sd - true, col=rgb(0,0,0,.4), xlim=c(0,4), ylim=c(-1.5,1.5),
     main="'error' over estimate")
abline(h=-1:1,v=.5,lty=2,col=rgb(1,0,0,.5))
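To see why the underestimated variances matter for inference, recall that a two-sample t statistic divides the mean difference by a standard error built from the estimated SD. A sketch with made-up numbers, assuming a hypothetical mean difference of 0.5 and 5 samples per group:

delta <- 0.5
n.grp <- 5
t.stat <- function(sd.est) delta / (sd.est * sqrt(2 / n.grp))
t.stat(0.2) # about 4.0: an under-estimated SD makes the gene look significant
t.stat(0.5) # about 1.6: an SD closer to the truth gives a much more modest t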

One final plot to show the change is to draw arrows, over the "true" sample SD from the full dataset, wherever the two estimates disagree. Note that the posterior estimates are not perfect, and in some cases we have moved the estimates in the wrong direction. However, overall we have improved more estimates than we have made worse.

plot(true, samp.sd, type="n", ylab="sample SD --> eBayes SD")
idx <- abs(samp.sd - ebayes.sd) > .1
arrows(true[idx], samp.sd[idx],
       true[idx], ebayes.sd[idx],
       col=rgb(0,0,0,0.3), length=.1)
abline(0,1)

sessionInfo()
## R version 4.0.4 (2021-02-15)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur 10.16
## 
## Matrix products: default
## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] parallel  stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] rafalib_1.0.0             limma_3.46.0              matrixStats_0.58.0       
##  [4] curatedBladderData_1.26.0 affy_1.68.0               Biobase_2.50.0           
##  [7] BiocGenerics_0.36.0       testthat_3.0.2            rmarkdown_2.7            
## [10] devtools_2.3.2            usethis_2.0.1            
## 
## loaded via a namespace (and not attached):
##  [1] highr_0.8             jquerylib_0.1.3       bslib_0.2.4           RColorBrewer_1.1-2   
##  [5] compiler_4.0.4        BiocManager_1.30.10   zlibbioc_1.36.0       prettyunits_1.1.1    
##  [9] remotes_2.2.0         tools_4.0.4           digest_0.6.27         pkgbuild_1.2.0       
## [13] pkgload_1.2.0         jsonlite_1.7.2        preprocessCore_1.52.1 memoise_2.0.0        
## [17] evaluate_0.14         lifecycle_1.0.0       rlang_0.4.10          cli_2.3.1            
## [21] yaml_2.2.1            xfun_0.22             fastmap_1.1.0         stringr_1.4.0        
## [25] withr_2.4.1           knitr_1.31            sass_0.3.1            desc_1.3.0           
## [29] fs_1.5.0              rprojroot_2.0.2       glue_1.4.2            R6_2.5.0             
## [33] processx_3.4.5        sessioninfo_1.1.1     callr_3.5.1           purrr_0.3.4          
## [37] magrittr_2.0.1        codetools_0.2-18      ps_1.6.0              ellipsis_0.3.1       
## [41] htmltools_0.5.1.1     assertthat_0.2.1      stringi_1.5.3         cachem_1.0.4         
## [45] crayon_1.4.1          affyio_1.60.0