Metafunctions for benchmarking in sensitivity analysis

Reliability Engineering & System Safety

Sensitivity analysis
Machine learning
Author

William Becker

Published

December 1, 2020

About

Comparison studies of global sensitivity analysis (GSA) approaches are limited in that they are performed on a single model or a small set of test functions, with a limited set of sample sizes and dimensionalities. This work introduces a flexible ‘metafunction’ framework for benchmarking, which randomly generates test problems of varying dimensionality and functional form from random combinations of plausible basis functions, evaluated over a range of sample sizes. The metafunction is tuned to mimic the characteristics of real models, in terms of the type of model response and the proportion of active model inputs. To demonstrate the framework, a comprehensive comparison of ten GSA approaches is performed in the screening setting, considering functions with up to 100 dimensions and up to 1000 model runs. The methods examined range from recent metamodelling approaches to elementary effects and Monte Carlo estimators of the Sobol’ total effect index. The results give a comparison in unprecedented depth, and show that on average and in the setting investigated, Monte Carlo estimators, particularly the VARS estimator, outperform metamodels. Indicatively, metamodels become competitive at around 10–20 runs per model input, but at lower ratios sampling-based approaches are more effective as a screening tool.
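To give a flavour of the idea, the sketch below shows one way a random metafunction could be built in R: pick a subset of active inputs, assign each a randomly chosen basis function and coefficient, and add a few random pairwise interactions. The helper name `make_metafunction`, the particular basis functions, coefficient distributions and interaction structure are illustrative assumptions only, not the paper's exact specification or the implementation in the sobolsens package.

```r
# Illustrative sketch of a random 'metafunction' generator.
# All choices below (basis set, coefficient ranges, share of active inputs)
# are assumptions for demonstration, not the published specification.

make_metafunction <- function(k = 20, p_active = 0.5, n_interactions = 5) {
  # candidate univariate basis functions
  bases <- list(
    linear      = function(x) x,
    quadratic   = function(x) x^2,
    cubic       = function(x) x^3,
    exponential = function(x) exp(x),
    sinusoid    = function(x) sin(2 * pi * x)
  )
  # randomly select which inputs are active
  active <- sort(sample(k, size = max(1, round(p_active * k))))
  # assign a random basis function and coefficient to each active input
  f_idx  <- sample(length(bases), length(active), replace = TRUE)
  coefs  <- rnorm(length(active))
  # random pairwise interactions between active inputs
  pairs  <- if (length(active) >= 2)
    replicate(n_interactions, sample(active, 2)) else NULL
  int_coefs <- rnorm(n_interactions)

  # return the generated test function: X is an n x k matrix of inputs in [0, 1]
  function(X) {
    y <- numeric(nrow(X))
    for (j in seq_along(active)) {
      y <- y + coefs[j] * bases[[f_idx[j]]](X[, active[j]])
    }
    if (!is.null(pairs)) {
      for (m in seq_len(ncol(pairs))) {
        y <- y + int_coefs[m] * X[, pairs[1, m]] * X[, pairs[2, m]]
      }
    }
    y
  }
}

# Usage: generate a 20-dimensional test function and evaluate it on a random design
set.seed(1)
f <- make_metafunction(k = 20)
X <- matrix(runif(500 * 20), ncol = 20)
y <- f(X)
```

Because the active inputs are known by construction, functions generated this way can be used to score how well each GSA method separates active from inactive inputs at a given sample size and dimensionality.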

Supporting code is found here, and the metafunction is integrated into the sobolsens package in R.