
Mind the Assumptions to Obtain Meaningful Scientific Models

by Ezio Renda
Emanuele Borgonovo examines how scientific theories and model reliability relate to each other, highlighting the importance of quantifying uncertainty and rigorously testing models for sensitivity.

Scientists use mathematical models to increase their understanding of complex systems. Scientific panels use the output from these models to guide policy decisions on climate change and pandemics.

The ideal situation for scientific modeling occurs when three elements are present simultaneously: a theory, which consists of a set of hypotheses and statements describing a system and from which propositions and theorems about the system's behavior can be derived; a mathematical model (a set of equations) derived from this theoretical framework; and a real-world phenomenon whose behavior is accurately described by the mathematical model. An illustrious example is Einstein's famous prediction based on the theory of general relativity, which led to an equation (the mathematical model) for the bending of starlight by the gravity of massive celestial bodies such as the Sun. Observations made during a solar eclipse years later confirmed that light rays were deflected just as Einstein's equations predicted.

In other situations, not all three vertices of this triangle are present, a situation often described as the "absence of theory." In these cases, scientists select a mathematical model based on their knowledge of the problem, make assumptions about parameter values, and compare predictions with real data. "To put it very simply," says Emanuele Borgonovo, Director of the Department of Decision Sciences at Bocconi, "we make assumptions about the values of the parameters the model processes to produce its response, based on our knowledge of the problem. Then we verify these hypotheses to ensure the model is usable in a given situation."
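As a concrete illustration of this workflow (not drawn from the chapter), the Python sketch below follows the pattern Borgonovo describes: a toy exponential-growth model, an assumed value for its growth-rate parameter, and a comparison of predictions against observations. The model, the parameter value, and the data (generated synthetically here) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: exponential growth with an assumed rate parameter.
# Both the model form and the parameter value are assumptions to be verified.
def model(t, growth_rate=0.3):
    return np.exp(growth_rate * t)

t = np.arange(10.0)
# Synthetic stand-in for real observations (noisy, generated with a
# slightly different rate, so the assumed model is not exactly right).
observed = np.exp(0.35 * t) + rng.normal(0.0, 0.5, size=t.size)

# Compare predictions with the data to judge whether the model,
# under its assumed parameter value, is usable in this situation.
residuals = observed - model(t)
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"RMSE under the assumed growth rate: {rmse:.2f}")
```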

In a book of essays on the politics of modeling, Borgonovo recently contributed a chapter on the importance of assumptions. "The greatest risk," Borgonovo asserts, "is so-called cherry-picking: the possibility that a researcher, needing to simplify reality in some way, selects hypotheses in a way that supports a predetermined thesis."

To guard against uncritical model use, Borgonovo emphasizes that best practice is always to conduct uncertainty quantification and sensitivity analysis, which reveal how, and to what extent, model results change as assumptions vary. If a parameter's value is highly uncertain and small differences in that parameter lead to large differences in the outcome, an alarm bell should ring. "Fundamentally, the researcher should always systematically test the model's behavior," Borgonovo states. The literature identifies four main settings: factor prioritization (identifying the parameters that most influence the model's response), trend determination (establishing the direction in which the response moves as inputs change), interaction quantification (measuring the relevance of interactions among parameters), and stability analysis (assessing the robustness of the model's response).
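To make these settings concrete, here is a minimal Python sketch (not taken from the chapter) of variance-based sensitivity analysis, applied to the Ishigami function, a standard benchmark in this literature. First-order Sobol indices support factor prioritization, while the gap between first-order and total indices quantifies interactions; the estimators used are the standard Saltelli (first-order) and Jansen (total) Monte Carlo formulas.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ishigami test function: a standard benchmark in sensitivity analysis.
# x3 has no first-order effect but interacts with x1.
def f(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, k = 100_000, 3

# Uncertainty quantification step: inputs are treated as random
# variables, here uniform on [-pi, pi]; A and B are independent samples.
A = rng.uniform(-np.pi, np.pi, size=(n, k))
B = rng.uniform(-np.pi, np.pi, size=(n, k))
f_A, f_B = f(A), f(B)
var_y = np.var(np.concatenate([f_A, f_B]))

for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]          # replace one input column at a time
    f_AB = f(AB)
    S_i = np.mean(f_B * (f_AB - f_A)) / var_y        # first-order index
    ST_i = 0.5 * np.mean((f_A - f_AB) ** 2) / var_y  # total index
    print(f"x{i + 1}: first-order = {S_i:.3f}, total = {ST_i:.3f}")
```

On this benchmark, x2 has the largest first-order index (factor prioritization), while x3's first-order index is near zero even though its total index is not: its entire influence comes through interaction with x1 (interaction quantification).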

In recent years, with the development of artificial intelligence and the proliferation of big data, "data models" have become increasingly common. These are sets of algorithms and complex mathematical equations that relate inputs to outputs without an underlying theory, and therefore without explicit hypotheses. Here the risk is that of not understanding how the model works and of having to use it as a black box, which makes understanding the underlying assumptions and quantifying the uncertainty in the model's response even more critical.
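One common way to probe such a black box is sketched below, under stated assumptions: a scikit-learn random forest stands in for the data model, the training data are synthetic, and scrambling one input at a time (a simple permutation-based sensitivity check) gauges how strongly each input drives the response. This is an illustrative technique, not a method prescribed by the chapter.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for "big data": the true input-output relation
# is unknown to the analyst, who only sees the fitted black box.
X = rng.uniform(-1.0, 1.0, size=(5_000, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, size=5_000)

black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Probe the black box exactly as one would a theory-based model:
# propagate input uncertainty and see how much each input matters.
X_new = rng.uniform(-1.0, 1.0, size=(2_000, 3))
base = black_box.predict(X_new)
print(f"output mean = {base.mean():.3f}, std = {base.std():.3f}")

for i in range(3):
    X_perm = X_new.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])  # break input i's link
    delta = black_box.predict(X_perm) - base
    print(f"input {i}: mean |change| when scrambled = {np.abs(delta).mean():.3f}")
```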

Emanuele Borgonovo, "Mind the Assumptions. Quantify Uncertainty and Assess Sensitivity," in Andrea Saltelli and Monica Di Fiore (eds.), The Politics of Modelling: Numbers Between Science and Policy, Oxford University Press, 2023.