My first encounter with demography was at the 2018 European Population Conference in Brussels. I was at the start of my PhD, still in the process of defining the overall direction of my thesis. I was an outsider at this event, which made attending the sessions a little frightening (Will they find me out?) but also exhilarating, since almost every idea, method, and discussion was new to me. I distinctly remember a session on The Future of Demography, where the panellists looked back on their discipline and thought about what should be done next. The discussants talked about how demography was dominated by excellent empirical work, but argued that it needed to be supported and balanced by also investigating the mechanisms that underlie population processes.

I understood then what was missing in my thesis project: a theoretical framework that would inform the questions I ask, how I ask them, and what the answers mean. My training to date, grounded predominantly in medicine and epidemiology, had taught me to ask questions about individuals: “How do I take care of this patient?”, or “Does exposure A increase the risk of disease B?” I could rely on a body of knowledge in physiology to think about what causes health and disease in individuals. But what is the equivalent source of knowledge when one thinks about health and disease in populations? As an example, let us consider the association between GDP and life expectancy. As I mentioned in my last post, this is an established empirical regularity described by the Preston curve. But why is it there? What are the mechanisms by which living in a wealthier society shapes the physiological processes of thousands of individuals in a way that is measurably different (in the aggregate) from those in poorer societies? Under what circumstances does this occur? Is population health really dependent on societal wealth, or are both byproducts of other mechanisms?
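To make the kind of regularity I have in mind concrete, here is a minimal sketch in Python of the shape the Preston curve usually takes, with life expectancy rising roughly with the logarithm of GDP per capita. The numbers are made up for illustration only; they are not real data.

```python
# Illustrative sketch of the Preston-curve regularity: life expectancy tends
# to rise roughly with the logarithm of GDP per capita.
# The figures below are hypothetical, not real observations.
import numpy as np

gdp = np.array([1_000, 2_000, 5_000, 10_000, 20_000, 40_000, 60_000])  # USD per capita
life_exp = np.array([58, 63, 69, 73, 77, 80, 82])                      # years

# Fit e0 = a + b * log(GDP): a summary of the association, silent about mechanisms.
b, a = np.polyfit(np.log(gdp), life_exp, deg=1)
print(f"e0 ≈ {a:.1f} + {b:.1f} * log(GDP per capita)")

# The fitted curve predicts but does not explain.
print(f"Predicted life expectancy at GDP 30,000: {a + b * np.log(30_000):.1f} years")
```

A fit like this summarises the association well enough to predict, but it says nothing about why the curve exists, which is precisely the gap I felt in my own toolkit.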

The place of theory in demography, and its apparent lack, has been debated before. The issue depends on how one defines scientific theory. Thomas K. Burch argued for the application of a model-based view of scientific theory to demography. This paradigm defines models as abstract representations of the real world, much like maps. It sees these abstractions as the fundamental unit of science, with theories being either collections of simpler models or models of large scope (a difference of degree rather than of kind); Burch uses models and theories interchangeably. This view of science is motivated by the position (or perhaps the insight?) that no empirical generalisation is always true in our reality of infinite complexity. The objective of science is therefore uncovering a more limited but perhaps more pragmatic sort of truth. It calls to mind George Box’s famous aphorism: “All models are wrong, but some are useful”. In Burch’s words:

It is one of the strengths of the model-based view of science that it directs us to use abstract models to study unique events, unlike logical empiricism which requires empirical generalizations about classes of events.

This view of reality, models, and science has important consequences for the evaluation of models and their falsification. Burch writes:

A model is a good model – Giere would not say a ‘true’ model – if it fits some portion of the real world (1) closely enough, (2) in certain respects, (3) for a specific purpose. All models are approximations. The question is whether the approximation is good enough for the purpose at hand. All models have a limited number of variables; none can mirror the numberless qualities of the real world. And finally, any model is to be evaluated with reference to the purpose for which it was designed or constructed.

The fit of a model to the real world is a matter for empirical examination. It is this empirical research that links model and data. But the conclusion that a model does not fit a particular case – perhaps not even closely – is only a conclusion that the model does not fit, not that the model is inherently false or invalid. It may well fit other cases. Decisions about whether or how well models fit the real world are based on scientific judgement, not on purely logical criteria.

This opens up a world of different possible models/theories, ranging from simple to complex. One distinction that I found valuable is between phenomenological models and fundamental ones. Phenomenological models describe what occurred or predict what will occur next (e.g., Newton’s theory of gravity). Fundamental models also propose causes and mechanisms of action (e.g., gravitons), and explain why something occurred and, crucially, what we can do about it. This introduces the concept of theory as an explanation of social phenomena and as a way to understand them.

What does it mean to understand social phenomena? In the essay Prediction and explanation in social systems, Hofman and colleagues point out that understanding in this context could mean at least two things: interpreting a phenomenon, or accounting for empirical regularities. The first meaning is closely related to sense-making, constructing a narrative about the event that provides a satisfying explanation of what happened and why. The second meaning is often related to crafting a quantitative model of the phenomenon, which can then be used to predict its occurrence in the future. The two may not overlap. We may explain why a phenomenon occurred, but have no idea how to predict it in the future. Conversely, we may build an algorithm that is successful at predicting an outcome, but find it difficult to explain how it works (cf., deep learning and the notion of “black boxes”).

Phenomena lie on a continuum of how well they can be predicted, ranging from relatively regular (e.g., the return of Halley’s comet) to largely unpredictable phenomena, also called “black swans” (e.g., the onset of the Great Recession). What differentiates phenomena along this continuum is the amount of randomness that goes into generating them. The example that Hofman and colleagues give is predicting success. In a world where success is determined purely by skill, predicting success would be limited solely by our ability to measure skill. However, in a world where success is by and large the consequence of luck, success will remain unpredictable even in theory. How to establish the underlying balance between randomness and determinism, and with it the theoretical limits to prediction, remains unclear in the general case.
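To get a feel for that argument, here is a small simulation sketch. It is my own toy illustration, not taken from Hofman and colleagues’ paper: success is generated as a weighted mix of skill and luck, and the best accuracy achievable by a predictor that observes only skill falls as luck’s share grows.

```python
# Toy simulation: how the skill/luck mix bounds predictive accuracy.
# My own illustration, not from Hofman and colleagues.
import numpy as np

rng = np.random.default_rng(2018)
n = 100_000

skill = rng.normal(size=n)  # the observable trait

for luck_share in (0.0, 0.25, 0.5, 0.75, 1.0):
    luck = rng.normal(size=n)  # unobservable randomness
    # Success is a weighted mix of skill and luck (weights keep total variance ~1).
    success = np.sqrt(1 - luck_share) * skill + np.sqrt(luck_share) * luck
    # A predictor that only sees skill can do no better than skill itself,
    # so this correlation is the ceiling on predictive accuracy in this toy world.
    r = np.corrcoef(skill, success)[0, 1]
    print(f"luck share {luck_share:.2f}: max correlation ≈ {r:.2f}, "
          f"variance explained ≈ {r**2:.2f}")
```

Even with skill measured perfectly, the variance in success that can be explained shrinks towards zero as luck takes over; that shrinking ceiling is the theoretical limit to prediction the authors have in mind.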

Returning to the distinction between prediction and explanation, Hofman and colleagues emphasise that neither of the two approaches to understanding social phenomena is inherently more correct, although the latter tended to dominate until relatively recently. The growing amount of data on social phenomena and the increased computational capacity available have made computational social science, and prediction-as-explanation, a practicable approach to building models of reality. What matters is that both approaches are valuable, that they may involve less of a trade-off than traditionally thought (cf., explainable AI), and that they complement each other. Hofman and colleagues write:

None of this is to suggest that complex predictive modeling should supplant traditional approaches to social science. Rather, we advocate a hybrid approach in which researchers start with a question of substantive interest and design the prediction exercise to address that question, clearly stating and justifying the specific choices made during the modeling process. These requirements do not preclude exploratory studies, which remain both necessary and desirable for a variety of reasons — for example, to deepen understanding of the data, to clarify conceptual disagreements or ambiguities, or to generate hypotheses. When evaluating claims about predictive accuracy, however, preference should be given to studies that use standardized benchmarks that have been agreed upon by the field or, alternatively, to confirmatory studies that preregister their predictions. Mechanisms revealed in this manner are more likely to be replicable, and hence to qualify as “true,” than mechanisms that are proposed solely on the basis of exploratory analysis and interpretive plausibility. Properly understood, in other words, prediction and explanation should be viewed as complements, not substitutes, in the pursuit of social scientific knowledge.

I have only scratched the surface of the model-based view of science, the philosophy of social explanation, and the ideas and approaches of computational social science. But these perspectives helped me construct a useful lens through which I can view my own research interests: mortality convergence in the early 21st century European Union. I now see that building a model of mortality convergence in a defined context (European Union, 1980-2020) is a legitimate way of approaching the issue. Reading the above material also motivated me to explore novel methods like machine learning and computer simulation to complement the more established methods of demographic analysis. Finally, it helps me relate my own work to previous research on mortality convergence in Europe, to which I will turn in a future post.