I met Ezekiel Bulver last month

Richard Feynman once said, “I’ve concluded that it’s not a scientific world.” He observed that people often believe so many wonderful things that the real message of the scientific method has failed to percolate through society: a method of systematic doubt in which nothing is ever certain and concepts only ever reside on a graduated scale somewhere between, but never at the ends of, absolute falsity and absolute truth. He saved his worst scorn for the many scientists who, by their training, are supposed to know better. In my experience, little has changed in the nearly half a century since his remarks.

I have the opportunity to share my work on COVID-19 and interact with epidemiologists from around the country and around the world, yet the most shocking part of my experience is running into Ezekiel Bulver. C.S. Lewis met him first:

...Ezekiel Bulver, whose destiny was determined at the age of five when he heard his mother say to his father—who had been maintaining that two sides of a triangle were together greater than a third— “Oh you say that because you are a man.” “At that moment”, E. Bulver assures us, “there flashed across my opening mind the great truth that refutation is no necessary part of argument. Assume that your opponent is wrong, and explain his error, and the world will be at your feet. Attempt to prove that he is wrong or (worse still) try to find out whether he is wrong or right, and the national dynamism of our age will thrust you to the wall.” That is how Bulver became one of the makers of the Twentieth Century.

Of course Bulver never existed; he is a rhetorical device created by C.S. Lewis himself. But this behaviour, bulverism (the logical fallacy of assuming someone’s argument is invalid or false at the outset, and then attempting to explain how the person became so mistaken by hypothesizing about her beliefs, psychology or motives, with no regard to the actual argument itself), has no place in scientific discourse. Bulverism gets us nowhere. The other side could just as easily use the argument against you. Not only is bulverism disrespectful, it’s pure foolishness. C.S. Lewis again,

If you try to find out which [ideas] are tainted by speculating about the wishes of the thinkers, you are merely making a fool of yourself. You must first find out on purely logical grounds which of them do, in fact, break down as arguments. Afterwards, if you like, go on and discover the psychological causes of the error...You must show that a man is wrong before you start explaining why he is wrong.

Bulverism is antithetical to the scientific method, and scientists who use it have stopped being scientists. Practicing it in front of Feynman would have been embarrassing. Arguing that papers or ideas on COVID-19 from highly respectable epidemiologists and other researchers, who argue in good faith, should be dismissed out of hand because of perceived ideology or inferred political beliefs is not science. There is plenty of uncertainty around COVID-19 and plenty of room for legitimate scientific disagreement. If we want to serve the public good, let’s stop inviting Mr. Bulver to the conversation.

Are lockdowns effective at stopping Covid-19?

My data science team continues to research COVID-19 propagation and measures that we can take in work environments to limit spread. We keep a sharp eye on the literature for interesting and novel statistical techniques applied to COVID-19 data and we recently came across a wonderful paper by Simon N. Wood. Readers of this blog might recognize Professor Wood’s name from a previous blog post where I promoted his book on Generalized Additive Models.

In his new paper Did COVID-19 infections decline before the UK lockdown?, Professor Wood examines the arrival dates of fatal infections across England and Wales and determines when fatal infections peaked. He finds that fatal infections were in substantial decline at least five or six days before the lockdowns started. Furthermore, he finds that the fatal infection profile does not exhibit a regime change around the lockdown date and that the profile for England and Wales follows a trajectory similar to Sweden’s. The result is important because Professor Wood focuses on the most reliably collected data – deaths due to COVID-19. Studies that focus on case counts to infer epidemiological parameters are always compromised by data that is highly truncated and censored, often in ways that are largely unknown to the researcher. While we can gain some insight from such data, the results are often informed as much by prior beliefs as by the data itself, leaving us in an unfortunate position for constructing scientifically based policy.

Death data are different. In this case, the clinical data directly measure the epidemiological quantities of interest. Death data from COVID-19, while not perfect, are much better understood and recorded than other COVID-19 quantities. To understand the effect of interventions from lockdowns, what can we learn from the arrival of fatal infections without recourse to strong modelling or data assumptions? This is where Professor Wood’s paper really shines.

Before discussing Professor Wood’s paper and results, let’s take a trip down epidemiological history lane. In September 1854, London experienced an outbreak of cholera. The outbreak was confined to Broad Street, Golden Square, and adjoining streets. Dr. John Snow painstakingly collected data on infections and deaths, and carefully curated the data into geospatial representations. By examining the statistical patterns in the data, questioning outliers, and following up with contacts, Dr. Snow traced the origin of the outbreak to the Broad Street public water pump. He made the remarkable discovery that cholera propagated through a waterborne pathogen. The handle to the pump was removed on September 8, 1854, and the outbreak subsided.

But did removing the pump handle halt the cholera outbreak? As a cause-and-effect statement, Dr. Snow got it right: cholera transmission occurs through contaminated water. But evaluation of the time series data shows that the removal of the Broad Street pump handle cannot be conclusively linked to the outbreak subsiding. Edward Tufte has a wonderful discussion of the history of Dr. Snow’s statistical work in Visual Explanations (Cheshire, Connecticut: Graphics Press, 1997), Chapter 2, Visual and Statistical Thinking: Displays of Evidence for Making Decisions. Let’s look at the time series of deaths in the area of London afflicted by the cholera outbreak in the plots below.

From Tufte: Visual Explanations

We clearly see that deaths were on the decline prior to the pump handle’s removal. People left the area, and people modified their behaviour. While the removal of the pump handle probably prevented future outbreaks and Dr. Snow’s analysis certainly contributed heavily to public health, it’s far from clear that the pump handle’s removal was a leading cause in bringing the Broad Street outbreak under control. Now, if we aggregate the data we can make it look like removing the pump handle was the most important effect. See the lower plot in the above figure. Tufte shows what happens if we aggregate on a weekly basis, and the confounding becomes even greater if we move the date ahead by two days to allow for the lag between infection and death. With aggregation we arrive at a very misleading picture, all an artifact of data manipulation. Satirically, Tufte imagines what our modern press would have done with Dr. Snow’s discovery and the public health intervention of removing the handle with the following graphic:

From Tufte: Visual Explanations
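The aggregation effect is easy to reproduce on synthetic numbers. Here is a minimal sketch in Python (hypothetical daily counts, not Snow’s data) showing how weekly totals, shifted two days for the lag between infection and death, hide the fact that the daily decline began well before the intervention:

import numpy as np

# Synthetic daily deaths (hypothetical numbers, not Snow's data): the outbreak
# peaks on day 8 and declines; the "intervention" happens on day 14.
daily = np.array([1, 2, 4, 7, 11, 15, 18, 20, 21, 19, 16, 12, 9, 7, 5, 4, 3, 2, 2, 1, 1, 1, 0])
intervention_day = 14

print("daily peak on day", int(daily.argmax()))   # day 8, six days before the intervention

# Weekly aggregation, shifted two days to "allow for the lag between infection and death".
shifted = daily[2:]
weekly = [int(shifted[i:i + 7].sum()) for i in range(0, len(shifted), 7)]
print("weekly totals:", weekly)
# The weekly totals only collapse in the bin after the intervention; the fact
# that the daily decline began well before day 14 is no longer visible.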

Fast forward to 2020 – Professor Wood is our modern-day Dr. Snow. The ultimate question that Professor Wood seeks to answer is: when did the arrival of fatal infections peak? He is looking to reconstruct the time course of infections from the most reliable data sources available. We know from the COVID-19 death data that deaths in the UK eventually declined after the lockdowns came into effect (March 24, 2020), which seems to point to the effectiveness of the intervention. But an infection that leads to a fatality takes time. Professor Wood builds a model, without complex assumptions, to account for this lag and infer the infection date time series. He works with COVID-19 death data from the Office for National Statistics for England and Wales, National Health Service hospital data, and the Folkhälsomyndigheten daily death data for Sweden. In the figure below we see his main result: in the UK, COVID-19 fatal infections were in decline prior to the lockdowns, peaking 5 to 6 days earlier. The UK profile follows that of Sweden, which did not implement a lockdown.

From Simon N. Wood: https://arxiv.org/abs/2005.02090

The technique he uses is rather ingenious. He uses penalized smoothing splines with a negative binomial counting process, while allowing for weekly periodicity. The smooth that picks up the trend in deaths is mapped back to the arrival of the fatal infections using the distribution of the time from infection to death. Based on hospitalization data and other sources, that distribution is well described by a lognormal with a mean of 26.8 days and a standard deviation of 12.4 days. The mapping matrix built from the distribution is nearly singular, but the smoothing penalty handles this problem.
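To make the mapping concrete, here is a minimal sketch in Python (not Professor Wood’s code) of how a lognormal infection-to-death distribution with mean 26.8 days and standard deviation 12.4 days can be discretized into a matrix that maps a daily profile of fatal infections onto expected daily deaths, and why that matrix cannot simply be inverted:

import numpy as np
from scipy.stats import lognorm

# Infection-to-death distribution: lognormal with mean 26.8 and sd 12.4 days,
# converted to the (mu, sigma) parameters of the underlying normal.
m, s = 26.8, 12.4
sigma2 = np.log(1.0 + (s / m) ** 2)
mu = np.log(m) - 0.5 * sigma2
delay = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

# Daily probability that death occurs d days after a fatal infection.
n_days = 120
pmf = np.diff(delay.cdf(np.arange(n_days + 1)))

# B[i, j] = P(death on day i | fatal infection on day j), so expected deaths = B @ infections.
B = np.zeros((n_days, n_days))
for j in range(n_days):
    B[j:, j] = pmf[: n_days - j]

# The matrix is nearly singular, which is why a smoothing penalty is needed
# rather than a direct inversion of B.
print(np.linalg.cond(B))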

One might worry that the time series reconstruction is biased, in the sense that it will always place the peak before the intervention date because the distribution of time from fatal infection to death smears the peak backward. We might then be fooled into believing that a peak and decline through the intervention date were not caused by the intervention, when in fact the intervention generated the effect with a sharp discontinuity. Professor Wood checks the model with simulated data in which fatal infections arrive at a high rate and then plummet at a lockdown intervention, and he tests how well the method captures the extreme discontinuity. We can see that the method does very well in picking up the discontinuity in the figure below.

From Simon N. Wood: https://arxiv.org/abs/2005.02090

There are issues that could undermine the conclusions, and Professor Wood expounds on them in his paper. The problem of hospital-acquired infections is important. People already in hospital are often weak and frail, and thus the duration from COVID-19 infection to death will be shortened should they become fatally infected. Professor Wood focuses on community transmission since it is this effect that lockdowns and social distancing target. Hospital-acquired transmissions will bias the inference, but the proportion of hospital-acquired infections in the death data would have to be quite high to radically alter the conclusions of Wood’s results. He discusses a mixture model to help understand this effect. There are also potential biases in the community-acquired fatal disease duration, including the possibility of age-dependent effects. Again, to substantially change the conclusions, the effects would have to be large.

Professor Wood is careful to point out that his paper does not prove that peak fatal infections occurred in England and Wales prior to the lockdowns. But the results do show that in the absence of strong assumptions, the most reliable data suggest that fatal infections in England and Wales were in decline before the lockdowns came into effect with a profile similar to that of Sweden. Like Dr. Snow’s pump handle, the leading effects that caused the decline in deaths in the UK may not have been the lockdowns, but the change in behaviour that had already started by early March, well before the lockdowns.

Professor Wood’s results may have policy implications and our decision makers would be wise to include his work in their thinking. We should look to collect better data and use similar analysis to understand what the data tell us about the effectiveness of any public health initiative. At the very least, this paper weakens our belief that the blunt instrument of lockdowns is the primary mechanism by which we can control COVID-19. And given the large public health issues that lockdowns also cause – everything from increased child abuse to future cancer patients who missed routine screening to increasing economic inequality – we must understand the tradeoffs and the benefits of all potential actions to the best of our ability.

Covid-19 branching model: details and calibration to data

Last month my collaborators, Jerome Levesque and David Shaw, and I built a branching process model for describing Covid-19 propagation in communities. In my previous blog post, I gave a heuristic description of how the model works. In this post, I want to expand on some of the technical aspects of the model and show how the model can be calibrated to data.

The basic idea behind the model is that an infected person creates new infections throughout her communicable period. That is, an infected person “branches” as she generates “offspring”. This model is an approximation of how a communicable disease like Covid-19 spreads. In our model, we assume that we have an infinite reservoir of susceptible people for the virus to infect. In reality, the susceptible population declines – over periods of time that are much longer than the communicable period, the recovered population pushes back on the infection process as herd immunity builds. SIR and other compartmental models capture this effect. But over the short term, and especially when an outbreak first starts, disease propagation really does look like a branching process. The nice thing about branching processes is that they are stochastic, and have lots of amazing and deep relationships that allow you to connect observations back to the underlying mechanisms of propagation in an identifiable way.

In our model, both the number of new infections and the length of the communicable period are random. Given a communicable period, we model the number of infections generated, Q(t), as a compound Poisson process,

(1)   \begin{equation*}Q(t) = \sum_{i=1}^{N(t)} \, Y_i,\end{equation*}

where N(t) is the number of infection events that arrived up to time t, and Y_i is the number infected at each infection event. We model Y_i with the logarithmic distribution,

(2)   \begin{equation*}\mathbb{P}(Y_i =k) = \frac{-1}{\ln(1-p)}\frac{p^k}{k}, \hspace{2em} k \in \{1,2,3,\ldots\},\end{equation*}

which has mean \mu = -\frac{p}{(1-p)\ln(1-p)}. The infection events arrive according to a Poisson process with rate \lambda (exponentially distributed inter-arrival times). The characteristic function for Q(t) reads,

(3)   \begin{align*}\phi_{Q(t)}(u) &=\mathbb{E}[e^{iuQ(t)}] \\ &= \exp\left(rt\ln\left(\frac{1-p}{1-pe^{iu}}\right)\right) \\ &= \left(\frac{1-p}{1-pe^{iu}}\right)^{rt},\end{align*}

with \lambda = -r\ln(1-p) and thus Q(t) follows a negative binomial process,

(4)   \begin{equation*}Q(t) \sim \mathrm{NB}(rt,p).\end{equation*}
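As a quick sanity check on eqs. (1)-(4), the following sketch (hypothetical parameter values) simulates the compound Poisson process with logarithmic jumps and compares it against NB(rt, p):

import numpy as np
from scipy.stats import logser, nbinom

rng = np.random.default_rng(1)
p, r, t = 0.5, 0.8, 10.0                  # hypothetical parameters
lam = -r * np.log(1.0 - p)                # arrival rate of infection events

# Simulate Q(t): a Poisson number of infection events, each with a logarithmic group size.
n_sim = 50_000
n_events = rng.poisson(lam * t, size=n_sim)
y = logser(p).rvs(n_events.sum(), random_state=rng)
q = np.zeros(n_sim)
np.add.at(q, np.repeat(np.arange(n_sim), n_events), y)

# Compare with NB(rt, p); note scipy's nbinom(n, prob) uses prob = 1 - p in our notation.
print(q.mean(), r * t * p / (1.0 - p))          # means agree
print(q.var(), nbinom(r * t, 1.0 - p).var())    # variances agree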

The negative binomial process is important here. Clinical observations suggest that Covid-19 is spread mostly by a minority of people, each infecting many others, and research suggests that the negative binomial distribution describes the number of infections generated by infected individuals. In our process, during a communicable period t, an infected individual infects Q(t) people based on a draw from the negative binomial with mean rtp/(1-p). The infection events occur continuously in time according to the Poisson arrivals. However, the communicable period t is in actuality a random variable, T, which we model as a gamma process,

(5)   \begin{equation*}f_{T(t)}(x) = \frac{b^{at}}{\Gamma(at)} x^{at-1}e^{-b x},\end{equation*}

which has a mean of \mathbb{E}[T(t)] = at/b. By promoting the communicable period to a random variable, the negative binomial process changes into a Levy process with characteristic function,

(6)   \begin{align*}\mathbb{E}[e^{iuZ(t)}] &= \exp(-t\psi(-\eta(u))) \\ &= \left(1- \frac{r}{b}\ln\left(\frac{1-p}{1-pe^{iu}}\right)\right)^{-at},\end{align*}

where \eta(u), the Levy symbol, and \psi(s), the Laplace exponent, are respectively given by,

(7)   \begin{align*}\mathbb{E}[e^{iuQ(t)}] &= \exp(t\,\eta(u)) \\\mathbb{E}[e^{-sT(t)}] &= \exp(-t\,\psi(s)), \end{align*}

and so,

(8)   \begin{align*}\eta(u) &= r\ln\left(\frac{1-p}{1-pe^{iu}}\right), \\\psi(s) &= a\ln\left(1 + \frac{s}{b}\right).\end{align*}

Z(t) is the random number of people infected by a single infected individual over her random communicable period and is further over-dispersed relative to a pure negative binomial process, getting us closer to driving propagation through super-spreader events. The characteristic function in eq.(6) for the number of infections from a single infected person gives us the entire model. The basic reproduction number R_0 is,

(9)   \begin{align*}R_0 &= \left(\frac{at\lambda}{b}\right)\left(\frac{-p}{\ln(1-p)(1-p)}\right).\end{align*}
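In the same spirit, a short simulation (again with hypothetical parameter values) of Z, the number of people infected over a random gamma-distributed communicable period, recovers the R_0 of eq. (9):

import numpy as np
from scipy.stats import logser

rng = np.random.default_rng(2)
p, r, a, b, t = 0.5, 0.15, 2.0, 0.14, 1.0   # hypothetical parameters
lam = -r * np.log(1.0 - p)                  # event arrival rate
mu = -p / ((1.0 - p) * np.log(1.0 - p))     # mean number infected per event

# Draw a gamma communicable period for each individual, then the compound Poisson count.
n_sim = 50_000
T = rng.gamma(shape=a * t, scale=1.0 / b, size=n_sim)
n_events = rng.poisson(lam * T)
y = logser(p).rvs(n_events.sum(), random_state=rng)
z = np.zeros(n_sim)
np.add.at(z, np.repeat(np.arange(n_sim), n_events), y)

r0 = (a * t * lam / b) * mu                 # eq. (9)
print(z.mean(), r0)                         # empirical mean of Z matches R_0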

From the characteristic function we can compute the total number of infections in the population through renewal theory. Given a random characteristic \chi(t), such as the number of infectious individuals at time t (e.g., \chi(t) = \mathbb{I}(t \in [0,\lambda_x)), where \lambda_x is the random communicable period), the expectation of the process follows,

(10)   \begin{equation*}\mathbb{E}(n(t)) = \mathbb{E}(\chi(t)) + \int_0^t\mathbb{E}(n(t-u))\mathbb{E}(\xi(du)).\end{equation*}

where \xi(du) is the counting process (see our paper for details). When an outbreak is underway, the asymptotic behaviour for the expected number of counts is,

(11)   \begin{equation*}\mathbb{E}(n_\infty(t)) \sim \frac{e^{\alpha t}}{\alpha\beta},\end{equation*}

where,

(12)   \begin{align*}\alpha &= \lambda\mu \left(1 - \left(\frac{b}{\alpha +b}\right)^{at}\right) \\\beta & = \frac{1}{\alpha}\left(1 - \frac{at\lambda \mu}{b}\left(\frac{b}{\alpha +b}\right)^{at+1}\right).\end{align*}
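Equation (12) defines \alpha only implicitly, but it is easy to solve numerically; a minimal sketch with hypothetical parameter values (not those used in our paper):

import numpy as np
from scipy.optimize import brentq

# Hypothetical parameters: mean communicable period at/b of about 14 days, R_0 about 2.1.
a, t, b = 2.0, 1.0, 0.14
lam, mu = 0.104, 1.44        # event arrival rate and mean infections per event

def f(alpha):
    # Zero of eq. (12): alpha = lam * mu * (1 - (b / (alpha + b))**(a * t))
    return alpha - lam * mu * (1.0 - (b / (alpha + b)) ** (a * t))

alpha = brentq(f, 1e-9, 10.0)   # the positive root, i.e. the exponential growth rate
beta = (1.0 - (a * t * lam * mu / b) * (b / (alpha + b)) ** (a * t + 1)) / alpha
print(alpha, beta, np.log(2.0) / alpha)   # growth rate, beta, and the implied doubling time in days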

The parameter \alpha, which is positive when the process is supercritical (R_0 > 1), is called the Malthusian parameter, and it controls the exponential growth of the process. Because the renewal equation gives us eq.(11), we can build a Bayesian hierarchical model for inference with just cumulative count data. We take US county data, curated by the New York Times, to give us an estimate of the Malthusian parameter, and therefore the local R-effective, across the United States. We use clinical data to set the parameters of the gamma distribution that controls the communicable period. We treat the counties as random effects and estimate the model using Gibbs sampling in JAGS. Our estimation model is,

(13)   \begin{align*}\log(n) &= \alpha t + \gamma + a_i t + g_i + \epsilon \nonumber \\a_i &\sim \text{N}(0,\sigma_1^2) \nonumber \\g_i & \sim \text{N}(0,\sigma_2^2) \nonumber \\\epsilon &\sim \text{N}(0,\sigma^2),\end{align*}

where i is the county label; the variance parameters use half-Cauchy priors and the fixed and random effects use normal priors. We estimate the model and generate posterior distributions for all parameters. The result for the United States using data over the summer is the figure below:

Summer 2020 geographical distribution of R-effective across the United States: 2020-07-01 to 2020-08-20.

Over the mid-summer, we see that the geographical distribution of R_{eff} across the US singles out the Midwestern states and Hawaii as hot-spots, while Arizona sees no county with exponential growth. We have the beginnings of a US county-based app which we hope to extend to other countries around the world. Unfortunately, count data on its own does not allow us to resolve the parameters of the compound Poisson process separately.
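For readers who want to experiment, a rough frequentist analogue of eq. (13), rather than our actual JAGS Gibbs sampler, can be fit as a random-intercept, random-slope mixed model; the counties and counts below are synthetic:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic cumulative-count data for a handful of "counties" (purely illustrative).
rows = []
for county in [f"C{i}" for i in range(20)]:
    growth = 0.08 + rng.normal(0.0, 0.02)            # county-level growth rate alpha + a_i
    for day in range(30):
        rows.append({"county": county, "t": day,
                     "log_n": growth * day + 2.0 + rng.normal(0.0, 0.1)})
df = pd.DataFrame(rows)

# log(n) = (alpha + a_i) t + (gamma + g_i) + eps, with county random slopes and intercepts.
fit = smf.mixedlm("log_n ~ t", df, groups=df["county"], re_formula="~t").fit()
print(fit.fe_params)         # overall alpha and gamma
print(fit.random_effects)    # each county's deviations a_i and g_i

The Bayesian version in JAGS adds the half-Cauchy and normal priors of eq. (13) and yields full posterior distributions rather than point estimates.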

If we have complete information, which might be possible in a small community setting, like northern communities in Canada, prisons, schools, or offices, we can build a Gibbs sampler to estimate all the model parameters from data without having to rely on the asymptotic solution of the renewal equation.

Define a complete history of an outbreak as a set of N observations taking the form of a 6-tuple:

(14)   \begin{equation*}(i,j,B_{i},D_{i},m_{i},o_{i}),\end{equation*}

where,

i: index of individual, j: index of parent, B_{i}: time of birth, D_{i}: time of death, m_{i}: number of offspring birth events, o_{i}: number of offspring.

With the following summary statistics:

(15)   \begin{align*}L & = \sum_{i} (D_{i} - B_{i});\,\,  \Lambda  = \prod_{i} (D_{i} - B_{i}) \nonumber \\ M & = \sum_{i} m_{i};\,\,  O = \sum_{i} o_{i} \nonumber \end{align*}

we can build a Gibbs sampler over the model's parameters as follows:

(16)   \begin{align*}p\,|\,r,L,O & \sim \text{Beta}\left(a_{0} + O,b_{0} + r L\right) \nonumber \\r\,|\,p,L,M & \sim \text{Gamma}\left(\eta_{0}+M,\rho_{0}-L\log(1-p)\right) \nonumber \\b\,|\,a,L,N & \sim \text{Gamma}\left(\gamma_{0}+aN,\delta_{0}+L\right)\nonumber \\a\,|\,b,\Lambda,N & \sim\text{GammaShape}\left(\epsilon_{0}\Lambda,\zeta_{0}+N,\theta_{0}+N\right)\end{align*}

where a_0, b_0, \eta_0, \rho_0, \gamma_0, \delta_0, \epsilon_0, \zeta_0, \theta_0 are hyper-parameters.
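A minimal sketch of the sampler in eq. (16), with hypothetical summary statistics and hyper-parameters; the non-standard GammaShape conditional for a is not implemented here (it needs its own sampling step), so a is held fixed for illustration:

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical summary statistics from a fully observed outbreak.
N, L, M, O = 40, 520.0, 95, 110     # individuals, total infectious time, events, offspring
a = 2.0                              # gamma shape held fixed; its GammaShape update is omitted

# Hypothetical hyper-parameters.
a0, b0, eta0, rho0, gamma0, delta0 = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0

p, r, b = 0.5, 0.2, 0.1              # initial values
draws = []
for _ in range(5000):
    # p | r, L, O ~ Beta(a0 + O, b0 + r L)
    p = rng.beta(a0 + O, b0 + r * L)
    # r | p, L, M ~ Gamma(eta0 + M, rate = rho0 - L log(1 - p))
    r = rng.gamma(eta0 + M, 1.0 / (rho0 - L * np.log(1.0 - p)))
    # b | a, L, N ~ Gamma(gamma0 + a N, rate = delta0 + L)
    b = rng.gamma(gamma0 + a * N, 1.0 / (delta0 + L))
    draws.append((p, r, b))

p_s, r_s, b_s = np.array(draws)[1000:].T   # discard burn-in
print(p_s.mean(), r_s.mean(), b_s.mean())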

Over periods of time that are comparable to the communicable window, such that increasing herd immunity effects are negligible, a pure branching process can describe the propagation of Covid-19. We have built a model that matches the features of this disease – high variance in infection counts from infected individuals with a random communicable period. We see promise in our model’s application to small population settings as an outbreak gets started.

An effective theory of Covid-19 propagation

To fight Covid-19, we need an understanding of how the virus propagates in a community. Right now, the workhorse engines in the literature and in the media are compartmental models: (S)usceptible (I)nfected (R)ecovered and its cousins. The most popular versions of these models start with a set of coupled ordinary differential equations which, when solved, generate paths for each compartment. For example, in the simple SIR model, the infection starts with a large susceptible population which diminishes as the infected population rises. The infected population eventually declines as the recovered population increases, and eventually the infected population goes to zero as the outbreak ends. The differential equations govern the dynamics of each compartment, generating equations of motion for the population.

Covid-19 propagation as a gamma subordinated negative binomial branching process

SIR models work well when we are discussing large populations, when we are interested in population averages, and when the random nature of the transmission doesn’t particularly matter. The solution to the coupled set of differential equations is deterministic. But when an outbreak gets started, or when we are interested in the dynamics in the small population limit, we need more than deterministic models. The SIR compartmental model, with its focus on averages, is not enough when dealing with the very early stages of an outbreak – and it’s the early stages where we really want our mitigation strategies to be the most successful. We need a stochastic model of transmission to understand the early stages of an outbreak.

Together with my colleagues Jerome Levesque and David Shaw, I built a branching model of Covid-19 propagation. The idea is that an infected person randomly infects other people over the course of the communicable period. That is, we model transmission by imagining that an infected person generates “offspring”, continuously in time, during the communicable period, and that each “child” follows the same statistical law for generating more “offspring”. The infections branch out from each infected person into a tree that makes up the infected community. So while on average an infected person will infect R0 other people (the basic reproduction number) during the communicable period, there is a range of possible outcomes. We could get lucky, and an initially infected person might not spread the virus at all, or we could get unlucky, and the initially infected person might become a super-spreader, all in a model with the same R0. In fact, even with R0>1, there can be a substantial probability that the outbreak will go extinct on its own, all depending on the statistical branching nature of transmission.
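To see how large that extinction probability can be, here is a small generation-by-generation simulation, a simplified Galton-Watson caricature of our continuous-time model, using a hypothetical R0 of 2.5 and a heavily over-dispersed negative binomial offspring distribution (dispersion k = 0.1):

import numpy as np

rng = np.random.default_rng(5)
R0, k = 2.5, 0.1    # hypothetical mean and dispersion of the offspring distribution

def dies_out(max_generations=50, cap=10_000):
    infected = 1
    for _ in range(max_generations):
        if infected == 0:
            return True
        if infected > cap:                      # treat a large outbreak as established
            return False
        # numpy's negative_binomial(n, p) has mean n(1-p)/p; with n=k and p=k/(k+R0)
        # each infected person infects R0 others on average, with heavy over-dispersion.
        infected = rng.negative_binomial(k, k / (k + R0), size=infected).sum()
    return infected == 0

n_runs = 5000
print(sum(dies_out() for _ in range(n_runs)) / n_runs)   # a large fraction go extinct despite R0 > 1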

In some research communities, there is a tendency to use agent-based models to capture the stochastic nature of an outbreak. Such models simulate the behaviour of many different individuals or “agents” in a population by assigning a complicated set of transmission rules to each person. In the quest for high fidelity, agent-based models tend to have lots of parameters. While agent-based approaches have merit, and they have enjoyed success in many fields, we feel that in this context these models are often too difficult to interpret, contain many layers of hidden assumptions, are extraordinarily difficult to calibrate to data while containing lots of identifiability issues, are easily susceptible to chaotic outputs, and obscure trade-off analysis for decision makers. In a decision making context we need a parsimonious model, one that gets the essentials correct and generates insight for trade-off analysis that decision makers can use. We need an effective theory of Covid-19 propagation in which we project all the complicated degrees of freedom of the real world down to a limited number of free parameters around which we can build statistical estimators.

The abstract of our paper:

We build a parsimonious Crump-Mode-Jagers continuous time branching process of Covid-19 propagation based on a negative binomial process subordinated by a gamma subordinator. By focusing on the stochastic nature of the process in small populations, our model provides decision making insight into mitigation strategies as an outbreak begins. Our model accommodates contact tracing and isolation, allowing for comparisons between different types of intervention. We emphasize a physical interpretation of the disease propagation throughout which affords analytical results for comparison to simulations. Our model provides a basis for decision makers to understand the likely trade-offs and consequences between alternative outbreak mitigation strategies particularly in office environments and confined work-spaces.

We focus on two parameters that decision makers can use to set policy: the average waiting time between infectious events from an infected individual, and the average number of people infected at an event. We fix the communicable period (distribution) from clinical data. Those two parameters go into the probabilistic model for branching the infection through the population. The decision maker can weigh trade-offs like restricting meeting sizes and interaction rates in the office while examining the extinction probabilities, growth rates, and size distributions for each choice.

You can find our paper here: https://medrxiv.org/cgi/content/short/2020.07.08.20149039v1

Covid-19 and decisions under uncertainty

Three excellent essays recently appeared in the Boston Review by Jonathan Fuller, John Ioannidis, and Marc Lipsitch on the nature of epidemiology and the use of data in making public health decisions. Each essay makes great points, especially Professor Ioannidis’ emphasis that Covid-19 public health decisions constitute trade-offs – other people will die based on our decisions to mitigate Covid-19. But I think all three essays miss the essential question, which transcends Covid-19 or even public health:

What is the optimal way to make irreversible decisions under uncertainty?

The answer to this question is subtle because it involves three competing elements: time, uncertainty, and irreversibility. In a decision making process, time gives us the opportunity to learn more about the problem and remove some of the uncertainty, but it’s irreversibility that makes the problem truly difficult. Most important problems tend to have a component of irreversibility. Once we make the decision, there is no going back or it is prohibitively expensive to do so, and our opportunity to learn more about the problem is over.

Irreversibility coupled with uncertainty and time means there is value in waiting. By acting too soon, we lose the opportunity to make a better decision, and by waiting too long, we miss the opportunity altogether. Waiting and learning incurs a cost, but that cost is often more than offset by the chance to make a better and more informed decision later. The value of waiting in finance is part of option pricing, and that value radically changes optimal decision making. There is an enormous amount of research on option valuation with irreversible decisions. The book Investment Under Uncertainty by Avinash Dixit and Robert Pindyck provides a wonderful introduction to the literature. When faced with an irreversible decision, the option value can be huge, even dwarfing the payoff from immediate action. At times, learning is the most valuable thing we can do. But for the option to have value, we must have time to wait. In now-or-never situations, the option loses its value completely simply because we have no opportunity to learn. The take-away message is this: the more irreversible the decision, the more time you have, and the more uncertainty you face, the larger the option value. The option value increases along each of these dimensions, thereby increasing the value of waiting and learning.
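A toy calculation with entirely hypothetical numbers makes the interplay concrete:

# Toy illustration of the value of waiting (entirely hypothetical numbers).
cost = 100.0                      # irreversible cost of acting
payoffs = [180.0, 40.0]           # two equally likely outcomes, unknown until next period
prob_good = 0.5

act_now = prob_good * payoffs[0] + (1 - prob_good) * payoffs[1] - cost          # = 10
wait_then_act = (prob_good * max(payoffs[0] - cost, 0.0)
                 + (1 - prob_good) * max(payoffs[1] - cost, 0.0))               # = 40
print(act_now, wait_then_act, wait_then_act - act_now)   # option value of waiting = 30
# In a genuinely now-or-never situation the waiting branch disappears and the option value is zero.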

Covid-19 has all of these ingredients – time, uncertainty, and irreversibility. Irreversibility appears through too many deaths if we wait too long, and economic destruction requiring decades of recovery if we are too aggressive in mitigation (while opening up global financial risks). There is a ton of uncertainty surrounding Covid-19, and the time windows in which to act are limited to varying degrees.

Those who call for immediate and strong Covid-19 mitigation strategies recognize irreversibility – we need to save lives while accepting large economic costs – and that even though we face enormous uncertainty, the costs incurred from waiting are so high compared to immediate action that the situation is ultimately now-or-never. There is no option value. Those who call for a more cautious and nuanced approach also see the irreversibility but feel that while the costs from learning are high and time is short, the option value is rescued by the enormous uncertainty. With high uncertainty, it can be worth a lot to learn even a little. Using the lens of option valuation, read these two articles by Professor Ioannidis and Professor Lipsitch from this March and you can see that the authors are actually arguing over the competing contributions of limited time and high uncertainty to an option’s value in an irreversible environment. They disagree on the value of information given the amount of time to act.

So who’s right? In a sense, both. We are not facing one uncertain irreversible decision; we face a sequence of them. When confronted by a new serious set of problems, like a pandemic, it can be sensible to down-weight the time you have and down-weight the uncertainty (by assuming the worst) at the first stage. Both effects drive the option value to zero – you put yourself in the now-or-never condition and you act. But for the next decision, and the next one after that, with decreasing uncertainty over time, you change course, and you use information differently by recognizing the chain of decisions to come. Johan Giesecke makes a compelling argument about the need for a course change with Covid-19 by thinking along these lines.

While option valuation can help us understand the ingredients that contribute to waiting, the uncertainty must be evaluated over some probability measure, and that measure determines how we weigh consequences. There is no objectively correct answer here. How do we evaluate the expected trade-off between excess Covid-19 deaths among the elderly vs a lifetime of lost opportunities for young people? How much extra child abuse is worth the benefit of lockdowns? That weighing of the complete set of consequences is part of the totality of evidence that Professor Ioannidis emphasizes in his essay.

Not only do time, uncertainty, and irreversibility drive the option value, but so does the probability measure. How we measure consequences is a value judgment, and in a democracy that measure must rest with our elected officials. It’s here that I fundamentally disagree with Professor Lipsitch. In his essay, he increasingly frames the philosophy of public health action in terms of purely scientific questions. But public action, the decision to make one kind of costly trade-off against another – and it’s always about trade-offs – is a deeply political issue. In WWII, President Truman made the decision to drop the bomb, not his generals. Science can offer the likely consequences of alternative courses of public health actions, but it is largely silent on how society should weigh them. No expert, public health official, or famous epidemiologist has special insight into our collective value judgment.

I am NOT the decider: the limits to science in public policy and decision making

In the last decade, Western politicians and government officials have made evidence-based decision making a key plank in their platforms and operations. From climate change to Covid-19, governments around the world are increasingly leaning on scientists and other experts to help form policy. I welcome scientific input; without it we are blind. At the same time I fear that we sometimes expect too much from science. Science cannot answer moral questions, and it cannot determine our values.

Parliament: Our collective decision making home.

Today, within some circles of our chattering classes it’s in vogue to complain that our democracies are too slow, too ineffectual, and too unresponsive; that somehow an administrative state run by experts and only lightly guided by politicians will offer superior results. But for all its shortcomings and imperfections in process, accountability from the election booth provides the best mechanism to ensure that our collective decision making lines up with our collective values. We invest the power of decision making in our elected officials for a reason – we demand that our leaders take responsibility, and then we make them accountable.

Science can never replace public decision making. How many of our civil liberties should we suspend to fight Covid-19? How much global warming is worth extra economic growth? How much poverty should we tolerate in our country? These are not scientific questions; they all require a value judgment, and there is no ultimate right answer. In an increasingly technical and scientific age, we need our democracy more than ever. Scientists, economists, and other professional experts are not elected and are not accountable to the public like an elected official. The real decision involves many competing issues on which scientists and other experts are just as dumb as the next guy. There is no “science machine” that can spit out the right course of action for our elected officials to take. The real strength of science is not certitude but doubt. With my data science team, I stress our role in government decision making with our team motto:

We draw conclusions from data, not recommendations.

By focusing on conclusions that the data can support, we help decision makers understand the likely consequences of alternative courses of action. We emphasize that for all its sophistication and mathematics, our input is a simplification of reality but with enough fidelity that we can help ring-fence the decision. We are under no illusion how difficult the real problem is, and we never put the decision maker to an ultimatum with a recommendation. We are not elected.

In digesting expert advice, I think Lord Salisbury’s insights from 1877 still apply:

No lesson seems to be so deeply inculcated by the experience of life as that you never should trust experts. If you believe the doctors, nothing is wholesome: if you believe the theologians, nothing is innocent: if you believe the soldiers, nothing is safe. They all require to have their strong wine diluted by a very large admixture of insipid common sense.

Covid-19 serology studies: a meta-analysis using hierarchical modelling

Serology studies are front-and-center in the news these days. Reports out of Santa Clara County, California, San Miguel County, Colorado, and Los Angeles suggest that a non-trivial fraction, more than 1%, of the population has SARS-CoV-2 antibodies in their bloodstream. European cities are following suit – they too are conducting serology studies and finding important fractions as well. The catch is that many of these studies find an antibody prevalence comparable to the false positive rate of their respective serology tests. The low statistical power associated with each study has invited criticism, in particular that the results cannot be trusted and that the study authors should temper their conclusions.

But all is not lost. Jerome Levesque (also a federal data scientist and the manager of the PSPC data science team) and I performed a meta-analysis on the results from Santa Clara County (CA), Los Angeles County (CA), San Miguel County (CO), Chelsea (MA), Geneve (Switzerland), and Gangelt (Germany). We used hierarchical Bayesian modelling with Markov Chain Monte Carlo (MCMC), and also generalized linear mixed modelling (GLMM) with bootstrapping. By painstakingly sleuthing through pre-prints, local government websites, scientific briefs, and study spokesperson media interviews, we not only obtained the data from each study, but we also found information on the details of the serology test used in each study. In particular, we obtained data on each serology test’s false positive rate and false negative rate through manufacturer websites and other academic studies. We take the data at face value and we do not correct for any demographic bias that might exist in the studies.

Armed with this data, we build a generalized linear mixed model and a full Bayesian model with a set of hyper-priors. The GLMM does the usual shrinkage estimation across the study results, and across the serology test false positive/negative rates while the Bayesian model ultimately generates a multi-dimensional posterior distribution, including not only the false positive/negative rates but also the prevalence. We use Stan for the MCMC estimation. With the GLMM, we estimate the prevalence by bootstrapping with the shrunk estimators, including the false positive/negative rates. Both methods give similar results.
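The core of both models is the link between the observed positive fraction and the true prevalence through the test’s error rates: P(test positive) = prevalence*sensitivity + (1 - prevalence)*(1 - specificity). Here is a minimal single-site sketch of the resulting posterior for prevalence, with hypothetical counts and test characteristics rather than the full hierarchical machinery:

import numpy as np
from scipy.stats import binom

# Hypothetical single-site data and test characteristics.
n_tested, n_positive = 3300, 50
sensitivity, specificity = 0.83, 0.985

# Grid posterior for prevalence under a flat prior, holding sensitivity/specificity fixed.
theta = np.linspace(0.0, 0.10, 2001)
p_obs = theta * sensitivity + (1.0 - theta) * (1.0 - specificity)
posterior = binom.pmf(n_positive, n_tested, p_obs)
posterior /= posterior.sum()

print(theta[posterior.argmax()], (theta * posterior).sum())   # posterior mode and mean
# In the full models, sensitivity and specificity carry their own priors and likelihood
# contributions from the test validation data and are estimated jointly with the prevalence.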

We find that there is evidence of high levels of antibody prevalence (greater than 1%) across all reported locations, but also that significant probability mass exists for levels lower than the ones reported in the studies. As an important example, Los Angeles shows a mode of approximately 4%, meaning that about 400,000 people in that county have SARS-CoV-2 antibodies. Given the importance of determining society-wide exposure to SARS-CoV-2 for correct inference of the infection fatality rate and for supporting contact tracing, we feel that the recent serology studies contain an important and strongly suggestive signal.

Our inferred distributions for each location:

Prevalence density functions (marginal posterior distribution) from the Bayesian MCMC estimation.
Prevalence density functions from the GLMM bootstrap.
Prevalence with the false positive rate (Bayesian MCMC).
Prevalence with the false positive rate (GLMM bootstrap).

The biggest wealth transfer in history – from our children to us

As the Western world grapples with Covid-19 by trying to find the right balance between limiting human contacts while keeping our economies open to at least some degree, we are embarking on perhaps the biggest wealth transfer in human history. We are in the process of transferring a very large portion of the future consumption of our children to the present in the form of increased safety. Between creditor bailouts and new spending, it will be our children who will have to pay the bill in the form of higher taxes.

Someone has to pay!

In a usual situation, we use debt to finance an asset that will generate an expected return. For example, a business like a restaurant might borrow to finance renovations or start-up costs, and the debt is paid back through business profits. Occasionally the restaurateur will fail and the loan might not get paid back in full, but that is why business loans don’t offer riskless interest rates. The higher interest rate is compensation for the possibility of failure. Government deficits operate in a similar fashion. The increased government debt is supposed to generate societal returns while recognizing that the debt must be paid back through taxation. As Ricardian equivalence points out, there is no free lunch – society internalizes the government’s budget constraint. To first order, people’s consumption decisions do not depend on how the government finances its spending, just on the spending itself. With increasing public debts, people anticipate the higher future taxes and change their consumption accordingly.

In the current situation, debts public and private are not financing assets that generate a return; they are just keeping the lights on. There are no extra business profits and no extra economic growth that we can expect from all this new debt to pay back the burden. This situation is the definition of a financial hole. Someone will have to cover that hole, and that someone is our children.

The total cost of Covid-19 mitigation is not just the current direct costs, but also the lost future economic growth as our children pay taxes to cover the hole instead of using their wealth to make investments and generate innovation. And these costs are really beginning to pile up. I wonder what the total cost per life-year saved will turn out to be, because in the end, that is what our children are buying with all the debt we are creating. How much of the future consumption of our children and our children’s children is the extra safety of today, which goes almost exclusively to senior citizens, worth? I don’t know, but I do know that our children and our children’s children don’t get a say.

Honestly, I find this all a little strange. We are waiting for a vaccine, but society went about its business long before Salk, and long before antibiotics. We built railways across the country and skyscrapers in our cities under what today would be considered prohibitively dangerous working conditions. You and I continue to benefit from that inheritance, but what will we bequeath to our children? Life was more hazardous in the past. I’m not suggesting that we return to 19th or early 20th century standards, but Covid-19 has made life only a little bit more dangerous again. Instead of living with and accepting some extra degree of danger, as previous generations did, apparently we are willing to risk destroying the opportunities of the generations coming up so that we can keep our safety as absolutely as high as possible. That trade-off is not a public health issue, it’s a moral one.

It’s a good thing that our ancestors didn’t shy away from risk; after we are done with Covid-19, maybe our children won’t either.

No better than a Fermi estimate?

Enrico Fermi, the great Italian-American physicist who contributed immensely to our understanding of nuclear processes and particle physics, was known for saying that any good physicist who knows anything about the scale of a problem should be able to estimate any result to within half an order of magnitude or better without doing a calculation. You only need to solve difficult equations when you want to do better than a factor of 2 or 3.

Enrico Fermi: How many piano tuners live in Chicago?

When I taught at Carleton University, I used to teach my students how to make Fermi estimates. I would ask them to estimate (without using Google!) the number of police officers in Ottawa, the number of marriages that took place in Ontario last summer, or the number of people who die in Canada every day. Fermi estimation isn’t magical, it’s just focused numeracy.
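For example, here is the kind of back-of-the-envelope estimate I would ask for, using only rounded common-knowledge inputs:

# Fermi estimate: roughly how many people die in Canada every day?
population = 38e6          # Canada's population, roughly
life_expectancy = 80.0     # years, roughly

deaths_per_day = population / life_expectancy / 365
print(round(deaths_per_day))   # about 1,300 per day
# The steady-state assumption overstates things a little, since the population is
# younger than steady state, but the answer lands within Fermi's factor of 2 or 3.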

There is an article in the CBC this morning, “What national COVID-19 modelling can tell us — and what it can’t.” Unfortunately, the author misses an opportunity to critically question the purpose of modelling and forecasting. The article contains a sub-title: “Uncertainty not a reason for doubt” (Really?!). On the numerical side, the article tells us that forecasts for Alberta predict between 400 and 3,100 Covid-19 deaths by the end of the summer, and that Quebec could see between 1,200 and 9,000 deaths by the end of April. Beyond the silliness of reporting two significant figures with such uncertainty, if that’s what the models are telling us, they don’t offer much because they are no better than a Fermi estimate. You can get these results by counting on your fingers, just like Enrico Fermi.

People want answers, I understand that. People don’t like not knowing things especially when they are frightened. But “models” that offer forecasts that are no better than Fermi estimates aren’t really models. There’s no need to solve differential equations when your model uncertainty exceeds the simple Fermi estimate. That doesn’t mean we shouldn’t work hard at building models, but it means that the Covid-19 prediction models need far better calibration from real world data before they can be useful in helping us understand the reality of future Covid-19 fatalities.

I will leave you with a wonderful story, told at the Federal Open Market Committee (a meeting at the Federal Reserve Bank) in September 2005 which highlights the absurdity that can result from forecasting behind a veil of ignorance:

During World War II, [Nobel laureate, Ken] Arrow was assigned to a team of statisticians to produce long-range weather forecasts. After a time, Arrow and his team determined that their forecasts were not much better than pulling predictions out of a hat. They wrote their superiors, asking to be relieved of the duty. They received the following reply, and I quote, “The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.”