Introduction

Classic inferential methods based on means (e.g., the ANOVA F test) assume normality and homoscedasticity (equal variances). A fundamental issue is whether violating these two assumptions is a serious practical concern. Based on numerous articles summarized in Wilcox (2017), the answer is an unequivocal “yes”. Under general conditions such methods can have relatively poor power, they can yield inaccurate confidence intervals, and they can poorly characterize the extent to which groups differ. Even a small departure from normality can be a serious concern. Despite the central limit theorem, certain serious concerns persist even when dealing with large sample sizes. Least squares regression inherits all of these concerns and introduces new ones.

A strategy for salvaging classic methods is to test their assumptions: for example, test the hypothesis that the groups have equal variances and, if the test fails to reject, assume homoscedasticity. However, published papers summarized in Wilcox (2017) indicate that this strategy is unsatisfactory. Roughly, such tests do not have enough power to detect situations where violating assumptions is a serious practical concern. A simple transformation of the data is another strategy that is unsatisfactory under general conditions.

The family of robust statistical methods offers an attractive framework for dealing with these issues. In some situations, robust methods make little practical difference; in others, they can substantially alter our understanding of the data. The only known way to determine which is the case is to apply a robust method and compare its results with those of a classic technique.

The R (R Core Team, 2019) package ecosystem gives the user many possibilities to apply robust methods. A general overview of the corresponding implementations is given in the CRAN task view on robust statistics. Here we focus on the WRS2 package, available on CRAN, which implements methods from the original WRS package (Wilcox and Schönbrodt, 2017). WRS2 is less comprehensive than WRS but implements the most important functionalities in a user-friendly manner (it uses data frames as basic input structures instead of lists, formula objects for model specification, basic S3 print/summary/plot methods, etc.). Here we elaborate on basic data analysis strategies implemented in WRS2 that are especially relevant for the social and behavioral sciences. The article starts with simple robust measures of location, dispersion, and correlation, followed by robust group comparison strategies such as t tests, ANOVA, between-within subject designs, and quantile comparisons. Subsequently, we present robust ANCOVA and robust mediation strategies.

Note that in the text we give only the minimum of technical details necessary for a basic understanding of each method. An excellent introduction to robust methods within a psychology context is given in Field and Wilcox (2017), and a more comprehensive treatment is given in Wilcox (2017). In order to follow this tutorial, a basic level of R proficiency is needed (e.g., as presented in Field, Miles, & Field, 2012).

Robust measures of location, scale, and dependence

Robust location measures

A robust alternative to the arithmetic mean \(\bar x\) is the class of trimmed means, which contains the sample median as a special case. A trimmed mean discards a certain percentage of values at both ends of the distribution. For instance, a 10% trimmed mean cuts off 10% at the lower end and 10% at the upper end of the distribution. Let \(x_{1}, \ldots, x_{10}\) be n = 10 sample values, sorted in ascending order. The 10% trimmed sample mean is

$$ \bar x_{t} = (x_{2} + x_{3} + {\cdots} + x_{8} + x_{9})/8. $$
(1)

That is, it excludes the lowest and the largest value and computes the arithmetic mean of the remaining values. The sample size h after trimming is called the effective sample size (here, h = 8). Note that if the trimming portion is set to γ = 0.5, the trimmed mean \(\bar x_{t}\) results in the median \(\tilde x\). An appeal of a 20% trimmed mean is that it achieves nearly the same amount of power as the mean when sampling from a normal distribution, and when there are outliers, a 20% trimmed mean can have a substantially smaller standard error.
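
To make Eq. (1) concrete, the following minimal sketch computes a 10% trimmed mean by hand and via the trim argument of the base mean function (the data vector is artificial and purely illustrative):

x <- sort(c(12, 15, 17, 18, 19, 21, 23, 24, 28, 95))   # artificial sample, n = 10
mean(x[2:9])          # Eq. (1): drop the lowest and the largest value (h = 8)
mean(x, trim = 0.1)   # same result using the trim argument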

In R, a trimmed mean can be computed via the basic mean function by setting the trim argument accordingly. Let us illustrate its computation using a simple data vector taken from a self-awareness and self-evaluation study by Dana (1990). The variable reflects the time (in seconds) persons could keep a portion of an apparatus in contact with a specified target. Note that this variable is skewed, which is typical of duration data. The 10% trimmed mean including its standard error (see Appendix for details) can be computed as follows. For comparison, we also report the standard arithmetic mean and its standard error.


> library("WRS2") > timevec <- c(77, 87, 88, 114, 151, 210, + 219, 246, 253, 262, 296, 299, + 306, 376, 428, 515, 666, 1310, 2611) > mean(timevec, 0.1) ## [1] 342.7059 > trimse(timevec, 0.1) ## [1] 103.2686 > mean(timevec) ## [1] 448.1053 > sd(timevec)/sqrt(length(timevec)) ## [1] 136.4174

The median including standard error from WRS2 is:


> median(timevec)
## [1] 262
> msmedse(timevec)
## [1] 77.83901

Note that in the case of ties, extant methods for estimating the standard error of the sample median can be highly inaccurate. This includes the method used by msmedse. Inferential methods based on a percentile bootstrap effectively deal with this issue, as implemented in the onesampb function.
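
For example, a percentile bootstrap test and confidence interval for the median of the reaction-time data could be obtained as follows (a sketch; the est and nboot settings shown are illustrative choices, and the seed is arbitrary):

set.seed(123)
onesampb(timevec, est = "median", nboot = 2000)   # percentile bootstrap for the median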

Another robust location alternative to the mean is the Winsorized mean. A 10% Winsorized mean, for example, is computed as follows: rather than discarding the lowest 10% of the values, as the 10% trimmed mean does, these values are set equal to the smallest value not trimmed. In a similar manner, the largest 10% are set equal to the largest value not trimmed. This process is called Winsorizing, which in effect transforms the tails of the distribution. Instead of Eq. (1), the 10% Winsorized sample mean uses

$$ \bar x_{w} = (x_{2} + x_{2} + x_{3} + {\cdots} + x_{8} + x_{9} + x_{9})/10. $$
(2)

Thus, it replaces the lowest and the largest values by their neighbors and computes the arithmetic mean of this new sequence of values. Similar to the trimmed mean, the amount of Winsorizing (i.e., the Winsorizing level γ) has to be chosen a priori. The WRS2 function to compute the Winsorized mean is called winmean, whereas winvar calculates the Winsorized variance.
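
The following minimal sketch (same artificial sample as above) illustrates Eq. (2) by Winsorizing by hand and comparing the result with winmean:

x <- sort(c(12, 15, 17, 18, 19, 21, 23, 24, 28, 95))   # artificial sample, n = 10
xw <- x
xw[1]  <- x[2]     # replace the lowest value by its neighbor
xw[10] <- x[9]     # replace the largest value by its neighbor
mean(xw)           # Eq. (2): 10% Winsorized mean
winmean(x, 0.1)    # same result via WRS2

Applied to the reaction-time data from above, we get: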


> winmean(timevec, 0.1)
## [1] 380.1579
> winse(timevec, 0.1)
## [1] 92.9417
> winvar(timevec, 0.1)
## [1] 129679

A general family of robust location measures are the so-called M-estimators (the “M” stands for “maximum likelihood-type”), which are based on a loss function to be minimized. In the simplest case, we can consider a quadratic loss function of the form \(\sum_{i=1}^{n} (x_{i} - \mu)^{2}\). Minimization results in the standard mean estimator \(\hat\mu = \frac{1}{n}\sum_{i=1}^{n} x_{i}\). Instead of quadratic loss, we can consider a more general, differentiable distance function ξ(⋅):

$$ \sum\limits_{i=1}^{n} \xi(x_{i} - \mu_{m}) \rightarrow \textrm{min!} $$
(3)

Let \({\Psi } = \xi ^{\prime }(\cdot )\) denote its derivative. The minimization problem reduces to \({\sum }_{i=1}^{n} {\Psi }(x_{i} - \mu _{m}) = 0\) where μm denotes the M-estimator. Several distance functions have been proposed in the literature. Huber (1981), for instance, proposed the following function:

$$ {\Psi}(x) = \left\{\begin{array}{ll} x & \quad \text{if } |x|\leq K\\ K \text{sign}(x) & \quad \text{if } |x|> K \end{array}\right. $$
(4)

K is the bending constant, for which Huber suggested the value K = 1.28. Increasing K decreases the sensitivity to the tails of the distribution. The estimation of M-estimators is performed iteratively (see Wilcox, 2017, for details) and is implemented in the mest function.
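
As a small illustration, Huber's Ψ from Eq. (4) can be coded directly (a sketch; huberPsi is an illustrative helper, not part of WRS2). Applied to the reaction-time data from above, mest and mestse then give the iteratively computed M-estimate and its standard error:

## Huber's Psi from Eq. (4): values beyond +-K are pulled back to +-K
huberPsi <- function(x, K = 1.28) ifelse(abs(x) <= K, x, K * sign(x))
huberPsi(c(-3, -0.5, 0.4, 2))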


> mest(timevec)
## [1] 285.1576
> mestse(timevec)
## [1] 52.59286

Other M-estimators are the one-step estimator and the modified one-step estimator (MOM), as implemented in the functions onestep and mom. In effect, they empirically determine which values are outliers and eliminate them. One-sample tests for the median, one-step, and MOM are implemented in onesampb (using a percentile bootstrap approach). Further details on these measures including expressions for standard errors can be found in Wilcox (2017, Chapter 3).

Robust correlation coefficients

Pearson’s correlation is not robust. Outliers can mask a strong association among the bulk of the data, and even a slight departure from normality can render it meaningless (Wilcox, 2017). Here we present two M-measures of correlation, meaning that they guard against the deleterious impact of outliers among the marginal distributions. The first is the percentage bend correlation ρpb, a robust measure of the linear association between two random variables. When the underlying data are bivariate normal, ρpb gives essentially the same values as Pearson’s ρ. In general, ρpb is more robust to slight changes in the data than ρ. The computation, involving a bending constant β (0 ≤ β ≤ 0.5), is given in Wilcox (2017, p. 491). WRS2 provides the pbcor function to calculate the percentage bend correlation coefficient and to perform a one-sample test (H0: ρpb = 0). For simultaneous inference on a correlation matrix, pball can be used. It also includes a statistic H, which tests the global hypothesis that all percentage bend correlations in the matrix are equal to 0 in the population.
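
As a quick illustration of how a single outlier can distort Pearson's correlation while leaving ρpb largely unaffected, consider the following sketch based on simulated data (the pbcor call uses its default bending constant):

set.seed(1)
x <- rnorm(30)
y <- x + rnorm(30, sd = 0.5)
y[1] <- 10          # plant a single extreme outlier
cor(x, y)           # Pearson correlation, distorted by the outlier
pbcor(x, y)         # percentage bend correlation with test of H0: rho_pb = 0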

A second robust correlation measure is the Winsorized correlation ρw, which requires the specification of the amount of Winsorization. The computation is simple: it applies Pearson’s correlation formula to the Winsorized data. The wincor function can be used in a similar fashion as pbcor; its extension to several random variables is called winall and is illustrated here using the hangover data from Wilcox (2017, p. 452). In a study on the effect of consuming alcohol, the number of hangover symptoms was measured for two independent groups, with each subject consuming alcohol and being measured on three different occasions. One group consisted of sons of alcoholics and the other one was a control group. Here we are interested in the Winsorized correlations across the three time points for the participants in the alcoholic group. The corresponding data subset needs to be organized in wide format with the test occasions in separate columns.


> library("reshape") > hangctr <- subset(hangover, + subset = group == "alcoholic") > hangwide <- cast(hangctr, id ~ time, + value = "symptoms")[,-1] > colnames(hangwide) <- paste("Time", 1:3) > winall(hangwide) ## Call: ## winall(x = hangwide) ## ## Robust correlation matrix: ## Time 1 Time 2 Time 3 ## Time 1 1.0000 0.2651 0.4875 ## Time 2 0.2651 1.0000 0.6791 ## Time 3 0.4875 0.6791 1.0000 ## ## p values: ## Time 1 Time 2 Time 3 ## Time 1 NA 0.27046 0.03935 ## Time 2 0.27046 NA 0.00284 ## Time 3 0.03935 0.00284 NA

Figure 1 shows the scatterplot matrix with the Pearson correlations in the upper triangle panels. These correlations clearly differ from the robust correlations reported above.

Fig. 1

Scatterplot matrix for hangover data. The upper triangle panels report the Pearson correlations

In order to test for equality of two correlation coefficients, twopcor can be used for Pearson correlations and twocor for percentage bend or Winsorized correlations. As an example, using the hangover dataset, we want to test whether the time 1/time 2 correlation ρpb1 of the control group is the same as the time 1/time 2 correlation ρpb2 of the alcoholic group.


> ct1 <- subset(hangover, subset = (group == "control" & time == 1))$symp
> ct2 <- subset(hangover, subset = (group == "control" & time == 2))$symp
> at1 <- subset(hangover, subset = (group == "alcoholic" & time == 1))$symp
> at2 <- subset(hangover, subset = (group == "alcoholic" & time == 2))$symp
> set.seed(123)
> twocor(ct1, ct2, at1, at2, corfun = "pbcor", beta = 0.15)
## Call:
## twocor(x1 = ct1, y1 = ct2, x2 = at1, y2 = at2, corfun = "pbcor", beta = 0.15)
##
## First correlation coefficient: 0.5886
## Second correlation coefficient: 0.5628
## Confidence interval (difference): -0.6855 0.8516
## p value: 0.94206

Note that the confidence interval (CI) for the correlation difference is bootstrapped. Other types of robust correlation measures are the well-known Kendall’s τ and Spearman’s ρ, as implemented in the base R cor function.

Robust two-sample testing strategies

Robust tests for two independent groups and effect sizes

Yuen (1974) proposed a test statistic for a two-sample trimmed mean test, which allows for the presence of unequal variances. The test statistic is

$$ T_{y} = \frac{\bar X_{t1} - \bar X_{t2}}{\sqrt{d_{1}+d_{2}}}, $$
(5)

where dj is an estimate of the squared standard error for \(\bar X_{tj}\), which is based in part on the Winsorized data. Under the null (H0: μt1 = μt2), the test statistic follows, approximately, a t-distribution with νy degrees of freedom (df). Formal expressions for the standard error in Eq. (5) and the df can be found in the Appendix. Note that if no trimming is involved, this method reduces to Welch’s classical t test with unequal variances (Welch, 1938), as implemented in t.test.

Yuen’s test is implemented in the yuen function. There is also a bootstrap version (see yuenbt), which is suggested to be used when the amount of trimming is close to zero. The example dataset, included in the WRS2 package, consists of various soccer team statistics in five different European leagues, collected at the end of the 2008/2009 season. Here we focus on the Spanish Primera División (20 teams) and the German Bundesliga (18 teams). The data are organized in an object called SpainGer in which the goals scored per game are in the (metric) variable called GoalsGame, and the variable League is a factor (nominal) indicating whether the team was from the Spanish Primera División or the German Bundesliga.

We are interested in comparing the trimmed means of goals scored per game across these two leagues. The group-wise boxplots with superimposed 1D scatterplots (points jittered) in Fig. 2 visualize potential differences in the distributions. Spain has a considerably right-skewed goal distribution involving three outliers (Barcelona, Real Madrid, Atletico Madrid). In the German league, the distribution looks fairly symmetric.

Fig. 2

Boxplots for scored goals per game (Spanish vs. German league) with superimposed 1D jittered scatterplot

Yuen’s test based on the trimmed means with the default trimming level of γ = 0.2 can be computed as follows:


> yuen(GoalsGame ~ League, data = SpainGer)
## Call:
## yuen(formula = GoalsGame ~ League,
##   data = SpainGer)
##
## Test statistic: 0.8394 (df = 16.17),
## p-value = 0.4135
##
## Trimmed mean difference: -0.11494
## 95 percent confidence interval:
## -0.405 0.1751

The result suggests that there are no significant differences in the trimmed means across the two leagues.
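
For illustration, the bootstrap version mentioned above can be called analogously (a sketch using the function's default settings, which to our understanding imply 20% trimming; the seed is arbitrary):

set.seed(123)
yuenbt(GoalsGame ~ League, data = SpainGer)   # bootstrap-t version of Yuen's test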

In terms of effect size, Algina, Keselman, and Penfield (2005) propose a robust version of Cohen’s d (Cohen, 1988).

$$ \delta_{t} = 0.642 \frac{\bar X_{t1} - \bar X_{t2}}{S^{\ast}_{w}} $$
(6)

The formal expression for \(S^{\ast}_{w}\) as well as a modification for unequal variances can be found in the Appendix. In WRS2, this effect size (equal variances assumed) can be computed as follows:


> akp.effect(GoalsGame ~ League,
+   data = SpainGer)
## [1] -0.281395

The same rules of thumb as for Cohen’s d can be used; that is, |δt| = 0.2, 0.5, and 0.8 correspond to small, medium, and large effects. However, we would like to point out that these rules should not be used blindly. As Cohen (1988, p. 79) puts it, anyone who finds that “what is here defined as large is too small (or too large) to meet what his area of behavioral science would consider appropriate standards is urged to make more suitable operational definitions”.

Wilcox and Tian (2011) proposed an explanatory measure of effect size ξ, which does not require equal variances and can be generalized to multiple group settings. A simple way to introduce this measure is to use the concept of explanatory power from regression with response Y and fitted values \(\hat Y\):

$$ \xi^{2} = \frac{\sigma^{2}(\hat Y)}{\sigma^{2}(Y)}, $$
(7)

where σ2(Y ) is some measure of variation associated with Y. When σ2(Y ) is taken to be the usual variance, ξ2 = ρ2, where ρ is Pearson’s correlation.

In a t test setting with equal sample sizes, σ2(Y ) can be simply estimated by the sample variance based on the 2n pooled observations, whereas \(\sigma ^{2}(\hat Y)\) is estimated with

$$ (\bar X_{1} - \bar X)^{2} + (\bar X_{2} - \bar X)^{2}, $$
(8)

where X̄ is the grand mean. The explanatory measure of effect size is simply \(\xi = \sqrt{\xi^{2}}\). To make this effect size measure “robust”, all that needs to be done is to replace the grand mean X̄ and the group means X̄1 and X̄2 in Eq. (8) with a robust location measure (e.g., trimmed mean, Winsorized mean, median) in order to estimate σ2(Ŷ). The variance σ2(Y ) needs to be replaced by the corresponding robust variance estimator (e.g., Winsorized variance).
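
The following conceptual sketch illustrates this construction for the soccer data with 20% trimming; note that it is not the exact estimator used by yuen.effect.ci, which applies additional rescaling, so the value should only be taken as a rough robust analogue of Eqs. (7) and (8):

xs  <- split(SpainGer$GoalsGame, SpainGer$League)
tm  <- sapply(xs, mean, trim = 0.2)     # 20% trimmed group means
num <- sum((tm - mean(tm))^2)           # robust analogue of Eq. (8)
den <- winvar(unlist(xs), 0.2)          # Winsorized variance of the pooled data
sqrt(num / den)                         # rough robust xi estimate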

In WRS2, the explanatory measure of effect size can be computed as follows:


> set.seed(123)
> yuen.effect.ci(GoalsGame ~ League,
+   data = SpainGer)
## $effsize
## [1] 0.15517
##
## $CI
## [1] 0.0000000 0.6295249

Values of ξ̂ = 0.10, 0.30, and 0.50 correspond to small, medium, and large effect sizes. The function also gives a confidence interval (CI) for ξ̂ based on a percentile bootstrap. Varying dispersions in the response variable across the factor levels (heteroscedasticity) are allowed.

If we want to run a two-sample test on median differences or general M-estimator differences, the pb2gen function can be used.


> set.seed(123)
> pb2gen(GoalsGame ~ League, data = SpainGer, est = "median")
## Call:
## pb2gen(formula = GoalsGame ~ League, data = SpainGer, est = "median")
##
## Test statistic: -0.1238, p-value = 0.41736
## 95 percent confidence interval:
## -0.4845 0.2082
> pb2gen(GoalsGame ~ League, data = SpainGer, est = "onestep")
## Call:
## pb2gen(formula = GoalsGame ~ League, data = SpainGer, est = "onestep")
##
## Test statistic: -0.1181, p-value = 0.48748
## 95 percent confidence interval:
## -0.3739 0.2153

These tests simply use the differences in medians (i.e., \(\tilde X_{1} - \tilde X_{2}\)) and the differences in Huber’s Ψ estimator from Eq. (4) (i.e., Ψ(X1) − Ψ(X2)), respectively, as test statistics. CIs and p values are determined through a percentile bootstrap. Currently, when using the median and there are tied values, this is the only known method that performs well in simulations (Wilcox, 2017).

Another function implemented in WRS2 is qcomhd for general quantile comparisons across two groups (Wilcox, Erceg-Hurn, Clark, & Carlson, 2014) using the quantile estimator proposed by Harrell and Davis (1982). The null hypothesis is simply H0 : 𝜃q1 = 𝜃q2, where 𝜃q1 and 𝜃q2 are the q-th quantiles in groups 1 and 2, respectively. Confidence intervals for \(\hat\theta_{q1} - \hat\theta_{q2}\) and p values are determined via a percentile bootstrap. This test provides a more detailed understanding of where and how the distributions differ. Let us apply this approach on the same data as above. We keep the default setting, which tests for differences in the 0.1, 0.25, 0.5, 0.75, and 0.95 quantiles. Note that the sample size is rather small for this test.


> set.seed(123)
> fitqt <- qcomhd(GoalsGame ~ League, data = SpainGer,
+   q = c(0.1, 0.25, 0.5, 0.75, 0.95), nboot = 500)
> fitqt
## Call:
## qcomhd(formula = GoalsGame ~ League, data = SpainGer, q = c(0.1,
##     0.25, 0.5, 0.75, 0.95), nboot = 500)
##
## Parameter table:
##      q n1 n2   est1   est2 est1-est.2  ci.low  ci.up p.crit p.value
## 1 0.10 20 18 1.0313 0.9035     0.1278 -0.1552 0.3098 0.0100   0.268
## 2 0.25 20 18 1.1950 1.0892     0.1058 -0.1787 0.2899 0.0500   0.464
## 3 0.50 20 18 1.3109 1.4304    -0.1194 -0.5058 0.2690 0.0167   0.492
## 4 0.75 20 18 1.6220 1.8078    -0.1858 -0.6089 0.4862 0.0125   0.548
## 5 0.95 20 18 2.5160 2.2402     0.2758 -0.6043 0.8677 0.0250   0.512

If we reject when the p value is smaller than p.crit, the family-wise error (FWE) is controlled using Hochberg’s method. Note that ties in the data are not problematic for this particular test. Plots that illustrate the results of quantile difference tests are implemented in the rogme package (Rousselet, Pernet, & Wilcox, 2017).
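
The same decisions can be reached by adjusting the p values directly with base R's p.adjust and comparing the adjusted values to α (a sketch using the p values reported above):

pvals <- c(0.268, 0.464, 0.492, 0.548, 0.512)   # p values from the qcomhd output above
p.adjust(pvals, method = "hochberg")            # compare the adjusted values to alpha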

Robust tests for two dependent groups

Yuen’s trimmed mean t test in Eq. (5) can be generalized to paired sample settings as follows:

$$ T_{y} = \frac{\bar X_{t1} - \bar X_{t2}}{\sqrt{d_{1}+d_{2}+d_{12}}} $$
(9)

Under the null (H0: μt1 = μt2), Ty is t-distributed with df = h − 1, where h is the effective sample size. Details on the computation of this statistic can be found in the Appendix.

The corresponding R function is called yuend, which also reports the explanatory measure of effect size. The dataset we use for illustration is in the MASS package (Venables & Ripley, 2002) and presents data pairs involving weights of girls before and after treatment for anorexia. We use a subset of 17 girls from the family treatment (FT) condition. Figure 3 presents the individual trajectories. We keep the default trimming level (20%) and get the following test results.

Fig. 3

Individual weight trajectories of anorexic girls before and after treatment


> library("MASS") > anorexiaFT <- subset(anorexia, + subset = Treat == "FT") > with(anorexiaFT, yuend(Prewt, Postwt)) ## Call: ## yuend(x = Prewt, y = Postwt) ## ## Test statistic: -3.829 (df = 10), ## p-value = 0.00332 ## ## Trimmed mean difference: -8.56364


## 95 percent confidence interval: ## -13.5469 -3.5804 ## ## Explanatory measure of effect size: 0.6

The output suggests that overall the treatment was successful. The explanatory measure of effect size, constructed according to the same principles as outlined above, suggests a large effect.

Quantile comparisons for paired samples (H0: 𝜃q1 = 𝜃q2) can be computed using Dqcomhd (Wilcox & Erceg-Hurn, 2012). Like its independent-sample counterpart qcomhd, it uses the quantile estimator proposed by Harrell and Davis (1982) and a bootstrap to determine the CI for \(\hat\theta_{q1} - \hat\theta_{q2}\) and the p values (with the corresponding critical values controlling the FWE).


> set.seed(123)
> with(anorexiaFT, Dqcomhd(Prewt, Postwt, q = c(0.25, 0.5, 0.75)))
## Call:
## Dqcomhd(x = Prewt, y = Postwt, q = c(0.25, 0.5, 0.75))
##
## Parameter table:
##      q n1 n2    est1    est2 est1-est.2   ci.low   ci.up p.crit p.value
## 1 0.25 17 17 79.9588 84.5667    -4.6079 -12.2888  1.8423 0.0500    0.25
## 2 0.50 17 17 83.1703 92.7727    -9.6024 -11.8829 -4.5479 0.0250    0.00
## 3 0.75 17 17 86.3380 95.8962    -9.5583 -11.8997 -6.9818 0.0167    0.00

We obtain significant weight decrease effects for the second and the third weight quartiles, but not for the first quartile.

Comparing two discrete distributions

Given two discrete variables X and Y with a small sample space, it is sometimes of interest to test whether their distributions differ at each realization x and y (H0: P(X = x) = P(Y = y)). The function binband provides such an implementation, allowing for both the method proposed by Storer and Kim (1990) and the one by Kulinskaya, Morgenthaler, and Staudte (2010). The test statistic is given in the Appendix.

Let us look at a simple artificial example involving responses on a five-point rating scale item across two groups of participants with group sizes n1 and n2. The binband function compares the two distributions at each possible value (here 1,2,…,5) in the joint sample space.


> g1 <- c(2, 4, 4, 2, 2, 2, 4, 3, 2, 4, 2, 3, 2, 4, 3, 2, 2, 3, 5, 5, 2, 2)
> g2 <- c(5, 1, 4, 4, 2, 3, 3, 1, 1, 1, 1, 2, 2, 1, 1, 5, 3, 5)
> binband(g1, g2, KMS = TRUE)
## Call:
## binband(x = g1, y = g2, KMS = TRUE)
##
## Parameter table:
##   Value p1.est p2.est   p1-p2  ci.low   ci.up p.value p.crit
## 1     1 0.0000 0.3889 -0.3889 -0.6266 -0.1194   0.004 0.0100
## 2     2 0.5000 0.1667  0.3333  0.0201  0.6115   0.037 0.0125
## 3     3 0.1818 0.1667  0.0152 -0.2337  0.2565   0.930 0.0500
## 4     4 0.2273 0.1111  0.1162 -0.1353  0.3504   0.390 0.0167
## 5     5 0.0909 0.1667 -0.0758 -0.2969  0.1458   0.510 0.0250

The CIs are determined using the Kulinskaya–Morgenthaler–Staudte method (KMS = TRUE). The function uses Hochberg’s multiple comparison adjustment to determine critical p values with the goal of controlling the probability of one or more type I errors. The results suggest that the distributions differ significantly at the value 1 only (p < p.crit).

One-way robust testing strategies

It is often said that F tests are quite robust against violations of normality. Field and Wilcox (2017, p. 37) argue that such statements should be banned: based on many papers published during the past 50 years, it is well established that this claim is not correct, especially when dealing with heavy-tailed distributions, unequal sample sizes, and distributions differing in skewness. In this section, we present various robust one-way ANOVA strategies, followed by higher-order models in the next section.

One-way trimmed means comparisons

The first robust ANOVA alternative presented here is a one-way comparison of J trimmed group means (H0 : μt1 = μt2 = ⋯ = μtJ), allowing for heteroscedasticity. Technical details on this F-distributed Welch-type test statistic (Welch, 1951) can be found in the Appendix.

In WRS2, this approach is implemented via the t1way function, here applied to the weight differences in the anorexia data from above (post-treatment weight minus pre-treatment weight, resulting in metric variable Wdiff). There are two different types of treatment in the data (family treatment FT and cognitive behavioral treatment CBT) as well as one control group, specified in the factor Treat. Figure 4 shows the corresponding boxplots with superimposed 1D scatterplots.

Fig. 4

Boxplots with superimposed jittered 1D scatterplots for weight differences across control and two treatment conditions

The robust one-way ANOVA based on trimmed means (20% trimming level) can be computed as follows:


> anorexia$Wdiff <- anorexia$Postwt - anorexia$Prewt
> t1way(Wdiff ~ Treat, data = anorexia)
## Call:
## t1way(formula = Wdiff ~ Treat, data = anorexia)
##
## Test statistic: F = 5.6286
## Degrees of freedom 1: 2
## Degrees of freedom 2: 24.89
## p-value: 0.00962
##
## Explanatory measure of effect size: 0.53

There is a significant overall effect in weight differences across the treatments. The explanatory measure of effect size ξ follows the same logic as outlined in Eq. (7). The difference compared to the two-sample version is that Eq. (8) generalizes to

$$ \sigma^{2}(\hat Y) = \frac{1}{J-1}\sum\limits_{j=1}^{J} (\bar Y_{j} - \bar Y)^{2}. $$
(10)

The same rules of thumb apply as in the two-sample case. In this example, we obtain a large effect.

Post hoc tests on trimmed means use the linear contrast expression

$$ \hat{\Psi} = \sum\limits_{j=1}^{J} c_{j} \bar X_{tj}. $$
(11)

In WRS2, the constants are specified in such a way that all pairwise post hoc tests are carried out. For instance, for comparing the first two trimmed means, c1 = 1 and c2 = −1, whereas the remaining c’s are 0.
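
As a quick sanity check, this first contrast estimate can be computed by hand before calling lincon (a sketch assuming the default 20% trimming and the alphabetical factor level order CBT, Cont, FT of the Treat factor; the value should match the corresponding psihat below up to rounding):

tmeans <- tapply(anorexia$Wdiff, anorexia$Treat, mean, trim = 0.2)  # 20% trimmed group means
sum(c(1, -1, 0) * tmeans)                                           # Eq. (11): CBT vs. Cont

The automated pairwise comparisons are then obtained with lincon: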


> lincon(Wdiff ~ Treat, data = anorexia)
## Call:
## lincon(formula = Wdiff ~ Treat, data = anorexia)
##
##                psihat  ci.lower ci.upper p.value
## CBT vs. Cont  2.96250  -3.03709  8.96209 0.22201
## CBT vs. FT   -6.10909 -12.33490  0.11672 0.01943
## Cont vs. FT  -9.07159 -16.08255 -2.06064 0.00293

The function reports the Ψ̂ values according to Eq. (11), denoting pairwise trimmed mean differences. The 95% CIs and the p values are adjusted for multiple testing in the sense that the simultaneous probability coverage of the CIs is 1 − α and the family-wise error rate is α. Details on this procedure can be found in Wilcox (1986). A bootstrap version of t1way is implemented in t1waybt, with corresponding bootstrap post hocs in mcppb20.
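
For illustration, these bootstrap versions could be called as follows (a sketch relying on the functions' default settings; the seed is arbitrary):

set.seed(123)
t1waybt(Wdiff ~ Treat, data = anorexia)   # bootstrap version of t1way
mcppb20(Wdiff ~ Treat, data = anorexia)   # bootstrap post hoc comparisons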

Note that in order to perform linear contrasts, there is no need to first obtain a significant omnibus ANOVA. In many experimental situations, researchers have specific predictions about certain contrasts, which can be directly tested (i.e., without computing an omnibus test first).

One-way quantile comparisons

In this section, we focus on testing H0 : 𝜃1 = … = 𝜃J, where the 𝜃’s represent a particular quantile in group j. Let us start with testing for equality of medians across J groups. The test statistic FM, given in the Appendix, follows the same concept as the one for trimmed means above; the only difference is that it uses an alternative estimate for the standard error. Using our anorexia dataset, it can be computed as follows:


> set.seed(123)
> med1way(Wdiff ~ Treat, data = anorexia)
## Call:
## med1way(formula = Wdiff ~ Treat, data = anorexia)
##
## Test statistic F: 4.5708
## Critical value: 2.8398
## p-value: 0.008

A few remarks regarding this test statistic. First, it has been found that evaluating the test statistic using the df quoted in the Appendix (i.e., ν1 = J − 1 and ν2 = ∞) can result in the actual level being less than the nominal level (i.e., around 0.02–0.025 when testing at the 0.05 level and n is small). A better strategy, as provided by this implementation, is to simulate the critical value and compute the p value accordingly. In order to make the result reproducible, we set a seed above.

Second, if there are too many ties in the data, the standard error becomes inaccurate. In such situations, the Qanova function provides a good alternative, which allows for general quantile testing across J groups, not only the median. Similar to qcomhd, the quantile ANOVA implemented in Qanova uses the Harrell–Davis estimator for the quantiles. It tests the global hypothesis:

$$ H_{0}: \quad \theta_{q1}-\theta_{q2} = \theta_{q2}-\theta_{q3} = {\ldots} = \theta_{q(J-1)}-\theta_{qJ}. $$

The p value is determined using a bootstrap (see Wilcox, 2017, p. 378–379 for details). In case multiple quantiles are tested at the same time, the p values are corrected using Hochberg’s method.


> set.seed(123)
> fitqa <- Qanova(Wdiff ~ Treat,
+   data = anorexia,
+   q = c(0.25, 0.5, 0.75))
> fitqa
## Call:
## Qanova(formula = Wdiff ~ Treat, data = anorexia,
##   q = c(0.25, 0.5, 0.75))
##
##          p.value  p.adj
## q = 0.25  0.0100 0.0200
## q = 0.5   0.0050 0.0150
## q = 0.75  0.0217 0.0217

It reports the unadjusted and adjusted p values, to be compared to the α-level. We find significant overall differences at each of the quartiles.

Robust two-way and three-way comparisons

This section elaborates on higher-order ANOVA designs including post hoc tests. Note that all WRS2 robust ANOVA functions allow the user to fit the full model (i.e., including all possible interactions) only. For more parsimonious models and specific post hoc contrasts, it is suggested to use the corresponding WRS functions from Wilcox and Schönbrodt (2017).

Robust two-way ANOVA strategies

Let us start with a two-way factorial ANOVA design involving J categories for the first factor and K categories for the second factor. The test statistic for the one-way trimmed mean comparisons, as implemented in t1way, can be generalized to two-way designs; details are given in the Appendix. The hypotheses to be tested are the usual two-way ANOVA hypotheses, formulated in terms of trimmed means. Let μt be the grand trimmed mean (population), \(\mu_{tjk}\) the trimmed mean in factor level combination jk, \(\mu_{tj\cdot}\) the trimmed factor level means of the first factor, and \(\mu_{t\cdot k}\) the trimmed factor level means of the second factor. Let \(\alpha_{j} = \mu_{tj\cdot} - \mu_{t}\), \(\beta_{k} = \mu_{t\cdot k} - \mu_{t}\), and \((\alpha\beta)_{jk} = \mu_{tjk} - \mu_{tj\cdot} - \mu_{t\cdot k} + \mu_{t}\). Using this notation, the null hypotheses are:

  • First factor: \(H_{0}: \sum_{j=1}^{J} \alpha_{j}^{2} = 0\).

  • Second factor: \(H_{0}: \sum_{k=1}^{K} \beta_{k}^{2} = 0\).

  • Interaction: \(H_{0}: \sum_{j=1}^{J} \sum_{k=1}^{K} (\alpha\beta)_{jk}^{2} = 0\).

Such a robust two-way ANOVA can be carried out using the function t2way. To illustrate, we use the beer goggles dataset by Field et al. (2012), who studied the effects of alcohol on mate selection in night clubs. The hypothesis is that after alcohol had been consumed, subjective perceptions of physical attractiveness would become more inaccurate (beer goggles effect). In this study, we have the factors gender (24 male and 24 female students) and the amount of alcohol consumed (none, 2 pints, 4 pints). At the end of the evening, the researcher took a photograph of the person the participant was chatting up. The attractiveness of the person on the photo was then evaluated by independent judges on a scale from 0 to 100 (response variable).

Figure 5 shows the interaction plots using the trimmed mean (20% trimming level) as location measure. The two-way ANOVA on the trimmed means can be fitted as follows.

Fig. 5

Trimmed means interaction plots for beer goggles dataset


> goggles$alcohol <- relevel(goggles$alcohol, ref = "None")
> t2way(attractiveness ~ gender*alcohol, data = goggles)
## Call:
## t2way(formula = attractiveness ~ gender * alcohol, data = goggles)
##
##                  value p.value
## gender          1.6667   0.209
## alcohol        48.2845   0.001
## gender:alcohol 26.2572   0.001

Not surprisingly, based on what we see in Fig. 5, the interaction between gender and alcohol is significant.

Post hoc tests can be applied using the mcp2atm function, which internally calls the lincon function described above.


> postgoggle <- mcp2atm(attractiveness ~ gender*alcohol, data = goggles)
> postgoggle$contrasts


##                gender1 alcohol1 alcohol2 alcohol3 gender1:alcohol1 gender1:alcohol2 gender1:alcohol3
## Female_None          1        1        1        0                1                1                0
## Female_2 Pints       1       -1        0        1               -1                0                1
## Female_4 Pints       1        0       -1       -1                0               -1               -1
## Male_None           -1        1        1        0               -1               -1                0
## Male_2 Pints        -1       -1        0        1                1                0               -1
## Male_4 Pints        -1        0       -1       -1                0                1                1

The second line prints the contrast matrix which illustrates what effects are actually being tested. The results are the following:


> postgoggle
## Call:
## mcp2atm(formula = attractiveness ~ gender * alcohol, data = goggles)
##
##                     psihat  ci.lower  ci.upper p-value
## gender1           10.00000  -6.00223  26.00223 0.20922
## alcohol1          -3.33333 -20.49551  13.82885 0.61070
## alcohol2          35.83333  19.32755  52.33911 0.00003
## alcohol3          39.16667  22.46796  55.86537 0.00001
## gender1:alcohol1  -3.33333 -20.49551  13.82885 0.61070
## gender1:alcohol2 -29.16667 -45.67245 -12.66089 0.00025
## gender1:alcohol3 -25.83333 -42.53204  -9.13463 0.00080

Let us focus on the interaction first by starting at the bottom. The last effect tells us that the differences in attractiveness ratings between 4 pints and 2 pints differ significantly between men and women. Similarly, the second-to-last effect tells us that this significant gender difference also applies to 4 pints vs. none. However, males and females do not behave differently if we look at 2 pints vs. none (no significant effect; see the third line from the bottom). Note that the 95% CIs and the p values are adjusted for multiple testing.

Other options for robust two-way ANOVAs are median comparisons using med2way, and general M-estimator comparisons using pbad2way. For both functions, post hoc comparisons can be computed using mcp2a (the estimator argument needs to be specified correspondingly) which uses percentile bootstrap for CIs and p values. Using the beer goggles dataset, the function calls for median and modified one-step estimators (MOM) are the following.


> set.seed(123)
> med2way(attractiveness ~ gender*alcohol,
+   data = goggles)
> mcp2a(attractiveness ~ gender*alcohol,
+   data = goggles, est = "median")
> pbad2way(attractiveness ~ gender*alcohol,
+   data = goggles, est = "mom")
> mcp2a(attractiveness ~ gender*alcohol,
+   data = goggles, est = "mom")

We omit showing the output here; the results are consistent with the trimmed mean comparisons above. Formal details on the median test are given in the Appendix; elaborations on M-estimator comparisons are given in Wilcox (2017, p. 385–388).

Robust three-way ANOVA strategies

Having three-way designs, WRS2 provides the function t3way for robust ANOVA based on trimmed means. The test statistics are determined according to the same principles as in t2way (see Appendix). Again, the critical values are adjusted such that no df of the χ2-distributed test statistics are reported (see Wilcox 2017, p. 341–346, for details).

The dataset we use to illustrate this approach is from Seligman, Nolen-Hoeksema, Thornton, and Thornton (1990). At a swimming team practice, 58 participants were asked to swim their best event as far as possible, but in each case the time reported was falsified to indicate poorer than expected performance (i.e., each swimmer was disappointed). Thirty minutes later, the athletes did the same performance again. The authors predicted that on the second trial more pessimistic swimmers would do worse than on their first trial, whereas optimistic swimmers would do better. The response is ratio = Time1/Time2. A ratio larger than 1 means that a swimmer performed better in trial 2. Figure 6 shows two separate interaction plots for male and female swimmers, using the 20% trimmed means.

Fig. 6

Interaction plot for the trimmed means of the time ratio response for males and females separately

A three-way robust ANOVA on the trimmed means using t3way can be computed as follows:


> t3way(Ratio ~ Optim*Sex*Event,
+   data = swimming)
## Call:
## t3way(formula = Ratio ~ Optim*Sex*Event,
##   data = swimming)
##
##                     value p.value
## Optim           7.1799150   0.016
## Sex             2.2297985   0.160
## Event           0.3599633   0.845
## Optim:Sex       6.3298070   0.023
## Optim:Event     1.1363057   0.595
## Sex:Event       3.9105283   0.192
## Optim:Sex:Event 1.2273516   0.572

The crucial effect for interpretation is the significant Optim:Sex two-way interaction. We could produce the corresponding two-way interaction plots and see that, independently of swimming style, for the females it does not matter much whether someone is an optimist or a pessimist: the time ratio does not change drastically. For the males, there is a substantial difference in the time ratio between optimists and pessimists.

Repeated measurement and mixed ANOVA designs

Paired samples/repeated measurement designs

In this section, we consider paired samples/repeated measurement designs for more than two dependent groups/time points. The WRS2 package provides an implementation of a robust heteroscedastic repeated measurement ANOVA based on the trimmed means. The formulas for the test statistic and the df computations are given in the Appendix.

In WRS2, the function to compute a robust repeated measurements ANOVA is rmanova with corresponding post hoc tests in rmmcp. The data need to be in long format and balanced across the groups. Each of these functions takes three arguments: a vector with the responses (argument: y), a factor for the groups (e.g., time points; argument: groups), and a factor for the blocks (typically a subject ID; argument: blocks).

Once more we use the hangover dataset from above, where hangover symptoms were measured for two independent groups, with each subject consuming alcohol and being measured on three different occasions. One group consisted of sons of alcoholics and the other was a control group. A representation of the dataset is given in Fig. 7.

Fig. 7

20% trimmed means of the number of hangover symptoms across three time points

Here we focus on the control group only (i.e., a single level of the between-subjects factor). In the next section, we consider the full dataset with the corresponding between-within subjects design. After subsetting the data accordingly, a robust repeated measurement ANOVA using the rmanova function can be fitted as follows:


> hangoverC <- subset(hangover,
+   subset = group == "control")
> with(hangoverC, rmanova(y = symptoms,
+   groups = time, block = id))
## Call:
## rmanova(y = symptoms, groups = time,
##   blocks = id)
##
## Test statistic: F = 2.6883
## Degrees of freedom 1: 2
## Degrees of freedom 2: 22
## p-value: 0.09026

Post hoc tests (linear contrasts) can be performed as follows:


> with(hangoverC, rmmcp(y = symptoms, groups = time, block = id))
## Call:
## rmmcp(y = symptoms, groups = time, blocks = id)
##
##           psihat ci.lower ci.upper p.value p.crit   sig
## 1 vs. 2 -2.66667 -7.47192  2.13858 0.14588 0.0169 FALSE
## 1 vs. 3 -1.00000 -3.17265  1.17265 0.22085 0.0250 FALSE
## 2 vs. 3  0.50000 -2.57826  3.57826 0.65583 0.0500 FALSE

The rmmcp function uses Hochberg’s approach to control for the FWE. The bootstrap version of rmanova is rmanovab with bootstrap post hocs in pairdepb.

Mixed designs

Let us extend the ANOVA setting above towards mixed designs. That is, we have within-subjects effects (e.g., due to repeated measurements) and between-subjects effects (group comparisons). The main function in WRS2 for computing a between-within subjects ANOVA on the trimmed means is bwtrim. For general M-estimators, the package offers the bootstrap based functions sppba, sppbb, and sppbi for the between-subjects effect, the within-subjects effect, and the interaction effect, respectively. Each of these functions requires the full model specification through the formula interface as well as an id argument that accounts for the within-subject structure.

We use the hangover data from above and fit a between-within subjects ANOVA on the 20% trimmed means:


> bwtrim(symptoms ~ group*time, id = id,
+   data = hangover)
## Call:
## bwtrim(formula = symptoms ~ group * time,
##   id = id, data = hangover)
##
##             value df1     df2 p.value
## group      6.6087   1 14.4847  0.0218
## time       4.4931   2 15.4173  0.0290
## group:time 0.5663   2 15.4173  0.5790

We get a non-significant interaction; both main effects are significant.

We can also perform post hoc comparisons on the single effects. WRS2 implements a bootstrap-based approach for one-step M estimators, modified one-step estimators (MOM), and medians. To illustrate the hypotheses being tested, we use a different dataset with a slightly more complex design (in terms of the number of factor levels). The study by McGrath (2016) looked at the effects of two forms of written corrective feedback on lexico-grammatical accuracy (errorRatio) in the academic writing of university students with English as a foreign language. It had a 3 × 4 within-by-between design with three groups (two treatment and one control; group) measured over four occasions (pre-test, treatment, post-test, delayed post-test; essay).

It helps to introduce the following notations. We have j = 1,…,J between subjects groups (in our example J = 3) and k = 1,…,K within subjects groups (e.g., time points; in our example K = 4). Let Yijk be the response of participant i, belonging to group j on measurement occasion k.

Ignoring the group levels j for the moment, Yijk can be simplified to Yik. For two occasions, k and k′, we can compute the difference score \(D_{ikk^{\prime }} = Y_{ik} - Y_{ik^{\prime }}\). Let \(\theta _{kk^{\prime }}\) be some M-estimator associated with \(D_{ikk^{\prime }}\). In the special case of two measurement occasions (i.e., K = 2), we can compute a single difference. In our example with K = 4 occasions, we can compute \(\binom{4}{2} = 6\) such M-estimators. The null hypothesis is:

$$ H_{0}: \theta_{1,2} = \theta_{1,3} = \theta_{1,4} = \theta_{2,3} = \theta_{2,4} = \theta_{3,4} $$

Thus, it is tested whether the “typical” difference score (as measured by an M-estimator) between any two levels of measurement occasions is 0 (while ignoring the between-subjects groups). For the essays dataset we get:


> set.seed(123)
> sppbb(errorRatio ~ group*essay, id,
+   data = essays)
## Call:
## sppbb(formula = errorRatio ~ group*essay,
##   id = id, data = essays)
##
## Test statistics:
##                Estimate
## essay1-essay2 -0.083077
## essay1-essay3  0.068214
## essay1-essay4  0.003929
## essay2-essay3  0.092500
## essay2-essay4 -0.033333
## essay3-essay4 -0.065769
##
## Test whether the corresponding population
## parameters are the same:
## p-value: 0.362

The p value suggests that we cannot reject the H0 of equal difference scores.

In terms of comparisons related to the between-subjects factor, we can think of two principles. The first one is to perform pairwise group comparisons within each measurement occasion (K = 4). In our case, this leads to \(4 \times \binom{3}{2} = 12\) parameters (here, the first index relates to j and the second index to k). We can establish the following K null hypotheses:

$$ \begin{array}{@{}rcl@{}} H_{0}^{(1)}:&& \theta_{1,1} = \theta_{2,1} = \theta_{3,1}\\ H_{0}^{(2)}:&& \theta_{1,2} = \theta_{2,2} = \theta_{3,2}\\ H_{0}^{(3)}:&& \theta_{1,3} = \theta_{2,3} = \theta_{3,3}\\ H_{0}^{(4)}:&& \theta_{1,4} = \theta_{2,4} = \theta_{3,4}. \end{array} $$

We aggregate these hypotheses into a single H0, which tests whether these K null hypotheses are simultaneously true.

$$ \begin{array}{@{}rcl@{}} H_{0}:&& \theta_{1,1} - \theta_{2,1} = \theta_{1,1} - \theta_{3,1} = \theta_{2,1} - \theta_{3,1} = \\ && \theta_{1,2} - \theta_{2,2} = \theta_{1,2} - \theta_{3,2} = \theta_{2,2} - \theta_{3,2} = \\ && \theta_{1,3} - \theta_{2,3} = \theta_{1,3} - \theta_{3,3} = \theta_{2,3} - \theta_{3,3} = \\ && \theta_{1,4} - \theta_{2,4} = \theta_{1,4} - \theta_{3,4} = \theta_{2,4} - \theta_{3,4} = 0. \end{array} $$

In WRS2, this hypothesis can be tested as follows:


> set.seed(123)
> sppba(errorRatio ~ group*essay, id,
+   data = essays, avg = FALSE)
## Call:
## sppba(formula = errorRatio ~ group*essay,
##   id = id, data = essays, avg = FALSE)
##
## Test statistics:
##                         Estimate
## essay1 Control-Indirect  0.17664
## essay1 Control-Direct    0.10189
## essay1 Indirect-Direct  -0.07475
## essay2 Control-Indirect  0.23150
## essay2 Control-Direct    0.25464
## essay2 Indirect-Direct   0.02314
## essay3 Control-Indirect  0.05614
## essay3 Control-Direct    0.18000
## essay3 Indirect-Direct   0.12386
## essay4 Control-Indirect  0.43300
## essay4 Control-Direct   -0.11489
## essay4 Indirect-Direct  -0.54789
##
## Test whether the corresponding population
## parameters are the same:
## p-value: 0.546

Again, we cannot reject H0.

Using this principle, many tests have to be carried out. An alternative that seems more satisfactory in terms of type I errors is to use the average across measurement occasions, that is

$$ \bar{\theta}_{j\cdot} = \frac{1}{K}\sum\limits_{k=1}^{K} \theta_{jk}. $$
(12)

Correspondingly, in our example a null hypothesis can be formulated as

$$ H_{0}: \bar{\theta}_{1\cdot} = \bar{\theta}_{2\cdot} = \bar{\theta}_{3\cdot} $$

and computed as follows by using the default avg = TRUE:


> set.seed(123)
> sppba(errorRatio ~ group*essay, id,
+   data = essays)
## Call:
## sppba(formula = errorRatio ~ group*essay,
##   id = id, data = essays)
##
## Test statistics:
##          Estimate
## Control    0.2243
## Indirect   0.1054
## Direct    -0.1189
##
## Test whether the corresponding population
## parameters are the same:
## p-value: 0.464

Finally, let us elaborate on the sppbi function which performs tests on the interactions. In the sppbb call, six parameters were tested and we ignored the between-subjects group structure. Now we do not further ignore the group structure and compute M-estimators based on measurement occasion differences for each group separately. In the notation below, the group index is on the right-hand side of the pipe symbol, the differences in measurement occasions on the left-hand side. The null hypothesis is as follows:

$$ \begin{array}{@{}rcl@{}} H_{0}:&& \theta_{1,2|1} - \theta_{1,3|1} = \theta_{1,4|1} - \theta_{2,3|1} = \theta_{2,4|1} - \theta_{3,4|1} = \\ && \theta_{1,2|2} - \theta_{1,3|2} = \theta_{1,4|2} - \theta_{2,3|2} = \theta_{2,4|2} - \theta_{3,4|2} = \\ && \theta_{1,2|3} - \theta_{1,3|3} = \theta_{1,4|3} - \theta_{2,3|3} = \theta_{2,4|3} - \theta_{3,4|3} = 0. \end{array} $$

The WRS2 function call to test this hypothesis is:


> set.seed(123)
> sppbi(errorRatio ~ group*essay, id,
+   data = essays)
## Call:
## sppbi(formula = errorRatio ~ group*essay,
##   id = id, data = essays)
##
## Test statistics:
##                         Estimate
## Control essay1-essay2   -0.14667
## Control essay1-essay3    0.12083
## Control essay1-essay4    0.26750
## Control essay2-essay3   -0.11778
## Control essay2-essay4   -0.02222
## Control essay3-essay4    0.09556
## Indirect essay1-essay2  -0.23600
## Indirect essay1-essay3   0.21678
## Indirect essay1-essay4   0.45278
## Indirect essay2-essay3   0.19293
## Indirect essay2-essay4  -0.07889
## Indirect essay3-essay4  -0.27182
## Direct essay1-essay2     0.10571
## Direct essay1-essay3     0.26905
## Direct essay1-essay4     0.16333
## Direct essay2-essay3    -0.20221
## Direct essay2-essay4     0.10643
## Direct essay3-essay4     0.30864
##
## Test whether the corresponding population
## parameters are the same:
## p-value: 0.646

Again, we cannot reject H0.

Robust nonparametric ANCOVA

Running interval smoothers

In this section, we introduce a robust ANCOVA version which uses smoothing internally. When dealing with regression, there are situations where the usual linear model appears to suffice. But it is well established that parametric regression models can be highly unsatisfactory. In general, a smoother is a function that approximates the true regression line via a technique that deals with curvature in a reasonably flexible manner. Smoothing functions typically have a smoothing parameter by means of which the user can steer the degree of smoothing. If the parameter is too small, the smoothing function might overfit the data. If the parameter is too large, we might disregard important patterns. The general strategy is to find the smallest parameter so that the plot looks reasonably smooth.

A popular regression smoother is LOWESS (locally weighted scatterplot smoothing) regression, which belongs to the family of nonparametric regression models and can be fitted using the lowess function. The smoothers presented here involve robust location measures from above and are called running interval smoothers, which work as follows.

We have pairs of observations (Xi, Yi). The strategy behind an interval smoother is to compute the γ-trimmed mean using all of the Yi values for which the corresponding Xi’s are close to a value of interest x (Wilcox, 2017). Let MAD be the median absolute deviation, that is, \(\text{MAD} = \text{median}|X_{i} - \tilde X|\). Let \(\text{MADN} = \text{MAD}/z_{0.75}\), where \(z_{0.75}\) represents the 0.75 quantile of the standard normal distribution. The point x is said to be close to Xi if

$$ |X_{i} - x| \leq f \times \text{MADN}. $$

Here, f is a constant called the smoothing parameter. As f increases, the neighborhood of x gets larger. Let

$$ N(X_{i}) = \{j: |X_{j}-X_{i}| \leq f \times \text{MADN}\}, $$

such that N(Xi) indexes all the Xj values that are close to Xi. Let 𝜃̂i be a robust location estimator of interest. A running interval smoother computes n such 𝜃̂i values, each based on the Y-values whose corresponding Xj is close to Xi. That is, the smoother defines an interval and runs across all the X-values. Within a regression context, these estimates represent the fitted values. Then we can plot the (Xi, 𝜃̂i) tuples into the (Xi, Yi) scatterplot, which gives us the nonparametric regression fit. The smoothness of this function depends on f.
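
The following sketch implements such a running interval smoother for 20% trimmed means directly from the definition above (the helper runSmooth is purely illustrative; the WRS2 functions described next are the recommended implementations):

## Minimal running interval smoother based on 20% trimmed means
runSmooth <- function(x, y, f = 0.8, tr = 0.2) {
  madn <- mad(x)                       # base R mad() already rescales by 1/z_0.75
  sapply(x, function(xi) {
    nb <- abs(x - xi) <= f * madn      # neighborhood N(X_i)
    mean(y[nb], trim = tr)             # robust location of the Y values in the neighborhood
  })
}
## e.g., plot(x, y); lines(sort(x), runSmooth(x, y)[order(x)])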

The WRS2 package provides smoothers for trimmed means (runmean), general M-estimators (rungen), and bagging versions of general M-estimators (runmbo), recommended for small datasets.

Let us look at a data example taken from Wright and London (2009) where we have measurements for the length of a chili and its heat (scored on a scale from 0 to 11). We study various f values and various robust location measures 𝜃̂i. The top panel in Fig. 8 displays smoothers involving different robust location measures. The bottom panel shows a trimmed mean interval smoother with varying smoothing parameter f. We see that, at least in this dataset, there are no striking differences among the smoothers (see functions runmean, rungen, and runmbo) based on the various location measures. However, the choice of the smoothing parameter f affects the fit heavily.

Fig. 8

Top panel: smoothers with various robust location measures. Bottom panel: trimmed mean smoother with varying smoothing parameter f

Robust ANCOVA

ANCOVA involves a factorial design and metric covariates that were not part of the experimental manipulation. It assumes homogeneity of regression slopes across the groups when regressing the dependent variable on the covariate. In addition, normality is assumed as well as two types of homoscedasticity. Violating any of these assumptions can have a serious negative impact on the classic ANCOVA method. The robust ANCOVA function in WRS2 does not assume homoscedasticity nor homogeneity of regression slopes. In fact, it does not make any parametric assumption on the regressions at all and uses running interval smoothing (trimmed means) for each subgroup. Both nonparametric curves can be compared for subgroup differences at various points of interest along the x-continuum.

The WRS2 function ancova fits a robust ANCOVA. In its current implementation, it is limited to one factor with two categories and one covariate only. A bootstrap version of it is implemented as well (ancboot). Both functions perform the running interval smoothing on the trimmed means. Yuen’s tests on trimmed mean differences are applied at specified design points. If the design point argument (pts) is not specified, the routine automatically computes five points (for details see Wilcox 2017, p. 695). It is suggested that group sizes around the design point subject to Yuen’s test should be at least 12. Regarding the multiple testing problem, the CIs are adjusted to control the probability of at least one type I error. The p values are not adjusted.

The dataset we use to demonstrate robust ANCOVA is from Gelman and Hill (2007). It is based on data involving an educational TV show for children called “The Electric Company”. In each of four grades, the classes were randomized into treated groups and control groups. The kids in the treatment group were exposed to the TV show, and those in the control group were not. At the beginning and at the end of the school year, students in all the classes were given a reading test. The average test scores per class (pre-test and post-test) were recorded. In this analysis, we use the pre-test score as the covariate and are interested in possible differences between treatment and control group with respect to the post-test scores. We are interested in comparisons at six particular design points. We set the smoothing parameters to a fairly small value.


> comppts <- c(18, 70, 80, 90, 100, 110)
> fitanc <- ancova(Posttest ~ Pretest + Group, fr1 = 0.3, fr2 = 0.3,
+   data = electric, pts = comppts)
> fitanc
## Call:
## ancova(formula = Posttest ~ Pretest + Group, data = electric,
##   fr1 = 0.3, fr2 = 0.3, pts = comppts)
##
##                n1 n2     diff     se lower CI upper CI statistic p-value
## Pretest = 18   21 20 -11.1128 4.2694 -23.3621   1.1364    2.6029  0.0163
## Pretest = 70   20 21  -3.2186 1.9607  -8.8236   2.3864    1.6416  0.1143
## Pretest = 80   24 23  -2.8146 1.7505  -7.7819   2.1528    1.6079  0.1203
## Pretest = 90   24 22  -5.0670 1.3127  -8.7722  -1.3617    3.8599  0.0006
## Pretest = 100  28 30  -1.8444 0.9937  -4.6214   0.9325    1.8561  0.0729
## Pretest = 110  24 22  -1.2491 0.8167  -3.5572   1.0590    1.5294  0.1380

Figure 9 shows the results of the robust ANCOVA fit. The vertical gray lines mark the design points. By taking into account the multiple testing nature of the problem, we get only one significant group difference, for a pre-test value of x = 90. For illustration, this plot also includes the linear regression fits for both subgroups (this is what a standard ANCOVA would do).

Fig. 9

Robust ANCOVA fit on TV show data across treatment and control group. The nonparametric regression lines for both subgroups are shown as well as the OLS fit (dashed lines). The vertical lines show the design points our comparisons are based on

Robust mediation analysis

In this section, we focus on a simple robust mediator model, involving a response Y, a predictor X, and a mediator M, and consisting of the following set of regressions:

$$ \begin{array}{@{}rcl@{}} Y_{i} &=& \beta_{01} + \beta_{11}X_{i} + \varepsilon_{i1}, \\ M_{i} &=& \beta_{02} + \beta_{12}X_{i} + \varepsilon_{i2}, \\ Y_{i} &=& \beta_{03} + \beta_{13}X_{i} + \beta_{23}M_{i} + \varepsilon_{i3} . \end{array} $$

The amount of mediation is reflected by the indirect effect β12β23 (also called the mediating effect). The state-of-the-art approach to test for mediation (H0: β12β23 = 0) is to apply a bootstrap approach as proposed by Preacher and Hayes (2004).

In terms of a robust mediator model version, instead of OLS, a robust estimation routine needs to be applied to estimate the regression equations above (e.g., an M-estimator as implemented in the rlm function can be used). For testing the mediating effect, Zu and Yuan (2010) proposed a robust approach that is implemented in WRS2 via the ZYmediate function. For technical details, we refer to Zu and Yuan (2010).

The example we use for illustration is taken from Howell (2012), and based on data by Leerkes and Crockenberg (2002). In this dataset (n = 92), the relationship between how girls were raised by their own mother (MatCare) and their later feelings of maternal self-efficacy (Efficacy), that is, our belief in our ability to succeed in specific situations, is studied. The mediating variable is self-esteem (Esteem). All variables are scored on a continuous scale.

In the first part, we fit a standard mediator model with bootstrap-based testing of the mediating effect using the mediation package (Tingley, Yamamoto, Hirose, Keele, & Imai, 2014).


> library("mediation") > fit.mx <- lm(Esteem ~ MatCare, data = Leerkes) > fit.yxm <- lm(Efficacy ~ MatCare + Esteem, data = Leerkes) > set.seed(123) > fitmed <- mediation::mediate(fit.mx, fit.yxm, treat = "MatCare", + mediator = "Esteem", sims = 999, boot = TRUE, boot.ci.type = "bca") > summary(fitmed) ## ## Causal mediation analysis ## ## Nonparametric bootstrap confidence intervals with the BCa method ## ## Estimate 95 ## ACME 0.0531 0.0152 0.10 0.006 ** ## ADE 0.0565 -0.0229 0.13 0.152 ## Total Effect 0.1096 0.0422 0.17 0.002 ** ## Prop. Mediated 0.4843 0.2453 2.68 0.008 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Sample Size Used: 92 ## ## ## Simulations: 999

In this output, the ACME (average causal mediation effect) represents the indirect effect of MatCare on Efficacy, including the 95% bootstrap CI. It suggests that there is a significant mediator effect.

Now we fit this mediation model in a robust way with ZYmediate from WRS2, which uses bootstrap for the CI of the mediation effect as well.


> set.seed(123)
> with(Leerkes, ZYmediate(MatCare, Efficacy,
+   Esteem, nboot = 2000))
## Call:
## ZYmediate(x = MatCare, y = Efficacy,
##   med = Esteem, nboot = 2000)
##
## Mediated effect: 0.0513
## Confidence interval: 0.016 0.0979
## p-value: 0.001

For the robust regression setting, we get similar results to OLS. The bootstrap-based robust mediation test suggests again a significant mediator effect.
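
The underlying robust regressions themselves can be estimated with rlm, for example (a sketch; the product of the two coefficients is only a point estimate of the indirect effect, while ZYmediate above additionally provides the bootstrap test):

library("MASS")
fit.m <- rlm(Esteem ~ MatCare, data = Leerkes)             # X -> M
fit.y <- rlm(Efficacy ~ MatCare + Esteem, data = Leerkes)  # X + M -> Y
coef(fit.m)["MatCare"] * coef(fit.y)["Esteem"]             # robust indirect effect estimate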

Note that robust moderator models can be fitted in a similar fashion as ordinary moderator models. Moderator models are often computed on the basis of centered versions of the predictor and moderator variables, including a corresponding interaction term (see, e.g., Howell, 2012). In R, a basic moderator model can be fitted using lm. A robust version of it can be achieved by replacing the lm call by an rlm call from the MASS package.
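
A minimal sketch of such a model, assuming a hypothetical data frame dat with response y, predictor x, and moderator m:

library("MASS")
dat$xc <- dat$x - mean(dat$x)              # centered predictor
dat$mc <- dat$m - mean(dat$m)              # centered moderator
fit.ols <- lm(y ~ xc * mc, data = dat)     # classical moderator model
fit.rob <- rlm(y ~ xc * mc, data = dat)    # robust counterpart via MASS::rlm
summary(fit.rob)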

Discussion

This article introduced the WRS2 package for computing basic robust statistical methods in a user-friendly manner. Such robust models and tests should be used when certain distributional assumptions, as required by classical statistical methods, cannot be justified. The main focus of the WRS2 package is on simple ANOVA (and related) strategies. For more complex designs, we suggest considering the following packages. The robustlmm package (Koller, 2016) implements robust mixed-effects models. For instance, if researchers have to deal with more complex between-within subjects settings that go beyond what the bwtrim function offers, robustlmm with its rlmer function is highly attractive. For complex mediator-moderator structures, or robust path models with or without latent variables in general, lavaan (Rosseel, 2012) offers a variety of robust estimators. Some applications are shown in Field and Wilcox (2017).

This said, we will continue to expand WRS2 with user-friendly versions of the raw functions described in Wilcox (2017).