Journal of Ecology recently published a new research article by Clark et al., “Predicting species abundances in a grassland biodiversity experiment: Trade‐offs between model complexity and generality”.
Author Adam Clark discusses the paper in more detail and explores the trade‐off between bias and variance when modelling ecological systems.
It is common knowledge that increasing the number of parameters in a regression improves model performance (e.g. R2), at least when performance is tested against the same dataset that was used to parameterise the model (i.e. “within-sample” performance). Perhaps less well known is that this increased complexity is often associated with decreased performance outside of the range of conditions used for parameterisation (i.e. “out-of-sample” performance). This statistical phenomenon, known as the “bias-variance trade-off”, arises because datasets include both general phenomena and particularities that occur in that dataset alone. When models are tested against the same data that were used to parameterise them, added complexity can always improve the fit (i.e. reduce bias) by tuning estimates to match observations. However, too much parameter tuning draws predictions towards these dataset-specific peculiarities, making estimates highly sensitive to the particular sample at hand (i.e. increasing variance); this reduces model generality, and therefore decreases out-of-sample performance due to “over-fitting”.
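The trade-off can be seen in a minimal sketch (not from the paper) using polynomial regression: model complexity here is simply the polynomial degree, the “within-sample” data are the points used for fitting, and the “out-of-sample” data lie beyond the fitted range. All function names and values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the bias-variance trade-off.
# Complexity = polynomial degree; higher degrees always fit the
# training data at least as well, but can extrapolate badly.
rng = np.random.default_rng(0)

def true_curve(x):
    # Hypothetical underlying process generating the data
    return np.sin(x)

# Within-sample data drawn from x in [0, 3];
# out-of-sample data extend beyond that range (x in [3, 4]).
x_train = rng.uniform(0, 3, 30)
y_train = true_curve(x_train) + rng.normal(0, 0.3, 30)
x_test = rng.uniform(3, 4, 30)
y_test = true_curve(x_test) + rng.normal(0, 0.3, 30)

for degree in (1, 3, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    # Mean squared error on the fitting data vs. the new data
    mse_in = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    mse_out = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: within-sample MSE {mse_in:.3f}, "
          f"out-of-sample MSE {mse_out:.3f}")
```

Within-sample error shrinks monotonically as degree increases (the simpler fits are nested inside the more complex ones), while out-of-sample error for the most complex fit grows sharply once predictions leave the range of the training data.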
This trade-off between bias and variance is a particularly common problem for models that describe interactions among species in diverse communities. These models can potentially include large numbers of parameters, e.g. describing the effects of intra- vs. interspecific competition, differential effects of competition for each pairwise combination of species, and even “higher-order” terms that modify pairwise effects as a function of community composition. Consequently, designing models that properly balance within- vs. out-of-sample performance is especially important, particularly in cases where predictions are meant to be generalisable across multiple sites and systems.
In a study recently published in Journal of Ecology, Clark et al. analyse data from the Jena “dominance” experiment to identify optimal levels of complexity along this trade-off. This experiment, located in Jena, Germany, includes nine locally abundant herbaceous plant species, sown along a diversity gradient of 1, 2, 3, 4, 6, or 9 species. These data were used to parameterise six models representing increasingly complex hypotheses about species interactions, ranging from intraspecific competition alone, to “higher-order” models of context-specific pairwise interactions. Finally, both within-sample and out-of-sample performance were calculated for these models after parameterising them using two different subsets of data: (1) the minimum number of diversity treatments needed for parameterisation (i.e. monocultures or two-species mixtures), and (2) diversity treatments spanning the full range of experimental conditions (plots sown with one, two, or nine species).
In accordance with expectations from the bias-variance trade-off, out-of-sample performance was generally best for models of intermediate complexity (i.e. with only two interaction coefficients per species – an intraspecific effect and a single pooled interspecific effect), especially for predictions that fell outside the range of diversity treatments used for parameterisation. More complex models typically provided the best performance for within-sample estimates, and when parameterised using data from the full range of experimental diversity treatments. However, while the performance of complex models dropped sharply in the out-of-sample cases, models of intermediate complexity always performed similarly to the best-fitting models.
Three general lessons can be drawn from this study. First, in many cases, models of intermediate complexity are likely to perform almost as well as more complex models. In particular, for models of species interactions, general models of intraspecific vs. interspecific interactions may often produce within-sample estimates nearly as good as those from more parameter-rich models that include separate terms describing pairwise or higher-order interactions. Second, even when they provide significantly better within-sample performance, highly complex models may often perform especially poorly out-of-sample, particularly outside of the range of conditions used for parameterisation. Finally, although generality is often a primary goal in ecology, many of the methods commonly used to compare model performance (e.g. AIC, or leave-one-out cross-validation) are more closely related to within-sample than to out-of-sample performance. Taken together, these lessons suggest that it may be wise to add complexity judiciously, only after we have convinced ourselves that simpler models fail to provide sufficient precision, and that complex models perform well across the full range of conditions that we wish to consider.
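The third lesson can be made concrete with a small sketch (again illustrative, not from the paper): AIC is computed entirely from the within-sample likelihood, here via the residual sum of squares under an assumed Gaussian error model, plus a penalty on parameter count; it never sees data outside the fitted range.

```python
import numpy as np

# Illustrative sketch: AIC compares models using only within-sample fit.
# Data-generating process and settings below are assumptions.
rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 40)
y = np.sin(x) + rng.normal(0, 0.3, 40)

def aic(degree):
    """AIC for a polynomial fit under Gaussian errors:
    n * ln(RSS / n) + 2k, where k counts fitted coefficients."""
    n = len(x)
    coefs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coefs, x) - y) ** 2)
    k = degree + 1  # polynomial coefficients
    return n * np.log(rss / n) + 2 * k

for d in (1, 3, 9):
    print(f"degree {d}: AIC {aic(d):.1f}")
```

Because the criterion rewards within-sample fit and only mildly penalises extra parameters, a model selected this way can still extrapolate poorly, which is why the study's explicit out-of-sample tests are informative.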
Adam Clark, German Centre for Integrative Biodiversity Research (iDiv), Germany
Read the full open access article online: Predicting species abundances in a grassland biodiversity experiment: Trade‐offs between model complexity and generality