Covid-19 and decisions under uncertainty

Three excellent essays recently appeared in the Boston Review by Jonathan Fuller, John Ioannidis, and Marc Lipsitch on the nature of epidemiology, and the use of data in making public health decisions. Each essay makes great points, especially Professor Ioannidis’ emphasis that Covid-19 public health decisions constitute trade-offs – other people will die based on our decisions to mitigate Covid-19. But I think all three essays miss the essential question, which transcends Covid-19 or even public health:

What is the optimal way to make irreversible decisions under uncertainty?

The answer to this question is subtle because it involves three competing elements: time, uncertainty, and irreversibility. In a decision making process, time gives us the opportunity to learn more about the problem and remove some of the uncertainty, but it’s irreversibility that makes the problem truly difficult. Most important problems tend to have a component of irreversibility. Once we make the decision, there is no going back or it is prohibitively expensive to do so, and our opportunity to learn more about the problem is over.

Irreversibility coupled with uncertainty and time means there is value in waiting. By acting too soon, we lose the opportunity to make a better decision, and by waiting too long, we miss the opportunity altogether. Waiting and learning incurs a cost, but that cost is often more than offset by the chance to make a better and more informed decision later. The value of waiting in finance is part of option pricing and that value radically changes optimal decision making. There is an enormous amount of research on option valuation with irreversible decisions. The book Investment Under Uncertainty by Avinash Dixit and Robert Pindyck provides a wonderful introduction to the literature. When faced with an irreversible decision, the option value can be huge, even dwarfing the payoff from immediate action. At times, learning is the most valuable thing we can do. But for the option to have value, we must have time to wait. In now-or-never situations, the option loses its value completely simply because we have no opportunity to learn. The take-away message is this: the more irreversible the decision, the more time you have, and the more uncertainty you face, the larger the option value. The option value increases along each of these dimensions, thereby increasing the value of waiting and learning.
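
To make the option value concrete, here is a toy two-period calculation in the spirit of Dixit and Pindyck (all of the numbers are invented for illustration): an irreversible project costs 100, and next period its value turns out to be either 150 or 70 with equal probability.

```python
# Toy option-value-of-waiting calculation (all numbers are illustrative).
# An irreversible project costs 100. Next period its value is revealed:
# 150 with probability 0.5, or 70 with probability 0.5. Discount factor: 0.9.

cost, p_good, v_good, v_bad, discount = 100.0, 0.5, 150.0, 70.0, 0.9

# Act now: commit before the uncertainty resolves.
npv_now = p_good * v_good + (1 - p_good) * v_bad - cost   # 110 - 100 = 10

# Wait and learn: invest next period only if the good state is revealed.
npv_wait = discount * p_good * max(v_good - cost, 0.0)    # 0.9 * 0.5 * 50 = 22.5

option_value = npv_wait - npv_now                         # 12.5: waiting wins
print(npv_now, npv_wait, option_value)
```

Acting now is profitable in expectation, yet waiting is worth more because it lets us avoid the bad state entirely. Set the discount factor to zero – the now-or-never case – and the option value vanishes.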

Covid-19 has all of these ingredients – time, uncertainty, and irreversibility. Irreversibility appears through too many deaths if we wait too long, and economic destruction requiring decades of recovery if we are too aggressive in mitigation (while opening up global financial risks). Covid-19 is shrouded in uncertainty, and the time windows in which we can act vary in length.

Those who call for immediate and strong Covid-19 mitigation strategies recognize irreversibility – we need to save lives while accepting large economic costs – and that even though we face enormous uncertainty, the costs incurred from waiting are so high compared to immediate action that the situation is ultimately now-or-never. There is no option value. Those who call for a more cautious and nuanced approach also see the irreversibility but feel that while the costs from learning are high and time is short, the option value is rescued by the enormous uncertainty. With high uncertainty, it can be worth a lot to learn even a little. Using the lens of option valuation, read these two articles by Professor Ioannidis and Professor Lipsitch from this March and you can see that the authors are actually arguing over the competing contributions of limited time and high uncertainty to an option’s value in an irreversible environment. They disagree on the value of information given the amount of time to act.

So who’s right? In a sense, both. We are not facing one uncertain irreversible decision; we face a sequence of them. When confronted by a new serious set of problems, like a pandemic, it can be sensible to down-weight the time you have and down-weight the uncertainty (by assuming the worst) at the first stage. Both effects drive the option value to zero – you put yourself in the now-or-never condition and you act. But for the next decision, and the next one after that, with decreasing uncertainty over time, you change course, and you use information differently by recognizing the chain of decisions to come. Johan Giesecke makes a compelling argument about the need for a course change with Covid-19 by thinking along these lines.

While option valuation can help us understand the ingredients that contribute to waiting, the uncertainty must be evaluated over some probability measure, and that measure determines how we weigh consequences. There is no objectively correct answer here. How do we evaluate the expected trade-off between excess Covid-19 deaths among the elderly vs a lifetime of lost opportunities for young people? How much extra child abuse is worth the benefit of lockdowns? That weighing of the complete set of consequences is part of the totality of evidence that Professor Ioannidis emphasizes in his essay.

Not only do time, uncertainty, and irreversibility drive the option value, but so does the probability measure. How we measure consequences is a value judgment, and in a democracy that measure must rest with our elected officials. It’s here that I fundamentally disagree with Professor Lipsitch. In his essay, he increasingly frames the philosophy of public health action in terms of purely scientific questions. But public action, the decision to make one kind of costly trade-off against another – and it’s always about trade-offs – is a deeply political issue. In WWII, President Truman made the decision to drop the bomb, not his generals. Science can offer the likely consequences of alternative courses of public health actions, but it is largely silent on how society should weigh them. No expert, public health official, or famous epidemiologist has special insight into our collective value judgment.

I am NOT the decider: the limits to science in public policy and decision making

In the last decade, Western politicians and government officials have made evidence-based decision making a key plank in their platforms and operations. From climate change to Covid-19, governments around the world are increasingly leaning on scientists and other experts to help form policy. I welcome scientific input; without it we are blind. At the same time I fear that we sometimes expect too much from science. Science cannot answer moral questions, and it cannot determine our values.

Parliament: Our collective decision making home.

Today, within some circles of our chattering classes it’s in vogue to complain that our democracies are too slow, too ineffectual, and too unresponsive; that, somehow, an administrative state run by experts and only lightly guided by politicians will offer superior results. But for all its shortcomings and imperfections in process, accountability from the election booth provides the best mechanism to ensure that our collective decision making lines up with our collective values. We invest the power of decision making in our elected officials for a reason – we demand that our leaders take responsibility, and then we make them accountable.

Science can never replace public decision making. How many of our civil liberties should we suspend to fight Covid-19? How much global warming is worth extra economic growth? How much poverty should we tolerate in our country? These are not scientific questions, they all require a value judgment and there is no ultimate right answer. In an increasingly technical and scientific age, we need our democracy more than ever. Scientists, economists, and other professional experts are not elected and are not accountable to the public like an elected official. The real decision involves many competing issues on which scientists and other experts are just as dumb as the next guy. There is no “science machine” that can spit out the right course of action for our elected officials to take. The real strength of science is not certitude but doubt. With my data science team, I stress our role in government decision making with our team motto:

We draw conclusions from data, not recommendations.

By focusing on conclusions that the data can support, we help decision makers understand the likely consequences of alternative courses of action. We emphasize that for all its sophistication and mathematics, our input is a simplification of reality, but with enough fidelity that we can help ring-fence the decision. We are under no illusion how difficult the real problem is, and we never present the decision maker with an ultimatum in the form of a recommendation. We are not elected.

In digesting expert advice, I think Lord Salisbury’s insights from 1877 still apply:

No lesson seems to be so deeply inculcated by the experience of life as that you never should trust experts. If you believe the doctors, nothing is wholesome: if you believe the theologians, nothing is innocent: if you believe the soldiers, nothing is safe. They all require to have their strong wine diluted by a very large admixture of insipid common sense.

Covid-19 serology studies: a meta-analysis using hierarchical modelling

Serology studies are front-and-center in the news these days. Reports out of Santa Clara county, California, San Miguel County, Colorado, and Los Angeles suggest that a non-trivial fraction, more than 1%, of the population has SARS-CoV-2 antibodies in their bloodstream. European cities are following suit – they too are conducting serology studies and finding important fractions as well. The catch is that many of these studies find an antibody prevalence comparable to the false positive rate of their respective serology tests. The low statistical power associated with each study has invited criticism, in particular, that the results cannot be trusted and that the study authors should temper their conclusions.
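
The concern can be made concrete with the standard Rogan–Gladen correction, which backs out the true prevalence from a raw positive rate given the test’s sensitivity and false positive rate (the numbers below are invented for illustration, not taken from any of the studies):

```python
def corrected_prevalence(raw_positive_rate, sensitivity, false_positive_rate):
    """Rogan-Gladen estimator: invert E[raw] = p*sens + (1 - p)*fpr for p."""
    p = (raw_positive_rate - false_positive_rate) / (sensitivity - false_positive_rate)
    return max(0.0, min(1.0, p))  # clip to a valid probability

# A 1.5% raw positive rate with a 0.5% false positive rate and 85%
# sensitivity implies a true prevalence of roughly 1.2%...
print(corrected_prevalence(0.015, 0.85, 0.005))
# ...but if the false positive rate matches the raw rate at 1.5%, the same
# data are consistent with zero prevalence.
print(corrected_prevalence(0.015, 0.85, 0.015))
```

When the raw rate and the false positive rate are comparable, the corrected estimate is exquisitely sensitive to the test characteristics – which is exactly why pooling information across studies and tests helps.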

But all is not lost. Jerome Levesque (also a federal data scientist and the manager of the PSPC data science team) and I performed a meta-analysis on the results from Santa Clara County (CA), Los Angeles County (CA), San Miguel County (CO), Chelsea (MA), Geneve (Switzerland), and Gangelt (Germany). We used hierarchical Bayesian modelling with Markov Chain Monte Carlo (MCMC), and also generalized linear mixed modelling (GLMM) with bootstrapping. By painstakingly sleuthing through pre-prints, local government websites, scientific briefs, and study spokesperson media interviews, we not only obtained the data from each study, but we also found information on the details of the serology test used in each study. In particular, we obtained data on each serology test’s false positive rate and false negative rate through manufacturer websites and other academic studies. We take the data at face value and we do not correct for any demographic bias that might exist in the studies.

Armed with this data, we build a generalized linear mixed model and a full Bayesian model with a set of hyper-priors. The GLMM does the usual shrinkage estimation across the study results, and across the serology test false positive/negative rates while the Bayesian model ultimately generates a multi-dimensional posterior distribution, including not only the false positive/negative rates but also the prevalence. We use Stan for the MCMC estimation. With the GLMM, we estimate the prevalence by bootstrapping with the shrunk estimators, including the false positive/negative rates. Both methods give similar results.

We find that there is evidence of high levels of antibody prevalence (greater than 1%) across all reported locations, but also that a significant probability mass exists for levels lower than the ones reported in the studies. As an important example, Los Angeles shows a mode of approximately 4%, meaning that about 400,000 people in that county have SARS-CoV-2 antibodies. Given the importance of determining society-wide exposure to SARS-CoV-2 for correct inferences of the infection fatality rate and for support to contact tracing, we feel that the recent serology studies contain an important and strongly suggestive signal.

Our inferred distributions for each location:

Prevalence density functions (marginal posterior distribution) from the Bayesian MCMC estimation.
Prevalence density functions from the GLMM bootstrap.
Prevalence with the false positive rate (Bayesian MCMC).
Prevalence with the false positive rate (GLMM bootstrap).

The biggest wealth transfer in history – from our children to us

As the Western world grapples with Covid-19 by trying to find the right balance between limiting human contacts while keeping our economies open to at least some degree, we are embarking on perhaps the biggest wealth transfer in human history. We are in the process of transferring a very large portion of the future consumption of our children to the present in the form of increased safety. Between creditor bailouts and new spending, it will be our children who will have to pay the bill in the form of higher taxes.

Someone has to pay!

In a usual situation we use debt to finance an asset that will generate an expected return. For example, a business like a restaurant might borrow to finance renovations or start-up costs, and the debt is paid back through business profits. Occasionally the restaurateur will fail and the loan might not get paid back in full, but that is why business loans don’t offer riskless interest rates. The higher interest rate is compensation for the possibility of failure. Government deficits operate in a similar fashion. The increased government debt is supposed to generate societal returns while recognizing that the debt must be paid back through taxation. As Ricardian equivalence points out, there is no free lunch – society internalizes the government’s budget constraint. To first order, people’s consumption decisions do not depend on how the government finances its spending, only on the spending itself. With increasing public debts, people anticipate the higher future taxes and change their consumption accordingly.

In the current situation, public and private debts are not financing assets that generate returns; they are just keeping the lights on. There are no extra business profits or extra economic growth that we can expect from all this new debt to pay back the burden. This situation is the definition of a financial hole. Someone will have to cover that hole, and that someone is our children.

The total cost of Covid-19 mitigation is not just the current direct costs, but also the lost future economic growth as our children pay taxes to cover the hole instead of using their wealth to make investments and generate innovation. And these costs are really beginning to pile up. I wonder what the total cost per life-year saved will turn out to be, because in the end, that is what our children are buying with all the debt we are creating. How much of the future consumption of our children and our children’s children is today’s extra safety, almost exclusively for senior citizens, worth? I don’t know, but I do know that our children and our children’s children don’t get a say.

Honestly, I find this all a little strange. We are waiting for a vaccine, but society went about its business long before Salk, and long before antibiotics. We built railways across the country and skyscrapers in our cities under what today would be considered prohibitively dangerous working conditions. You and I continue to benefit from that inheritance, but what will we bequeath to our children? Life was more hazardous in the past. I’m not suggesting that we return to 19th or early 20th century standards, but Covid-19 has made life only a little bit more dangerous again. Instead of living with and accepting some extra degree of danger, as previous generations did, apparently we are willing to risk destroying the opportunities of the generations coming up so that we can keep our safety as absolutely as high as possible. That trade-off is not a public health issue, it’s a moral one.

It’s a good thing that our ancestors didn’t shy away from risk; after we are done with Covid-19, maybe our children won’t either.

No better than a Fermi estimate?

Enrico Fermi, the great Italian-American physicist who contributed immensely to our understanding of nuclear processes and particle physics, was known for saying that any good physicist who knows anything about the scale of a problem should be able to estimate any result to within a half order of magnitude or better without doing a calculation. You only need to solve difficult equations when you want to do better than a factor of 2 or 3.

Enrico Fermi: How many piano tuners live in Chicago?

When I taught at Carleton University, I used to teach my students how to make Fermi estimates. I would ask them to estimate (without using Google!) the number of police officers in Ottawa, the number of marriages that took place in Ontario last summer, or the number of people who die in Canada every day. Fermi estimation isn’t magical, it’s just focused numeracy.
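
One of those classroom questions can be answered in a few lines (the inputs are deliberately round numbers, which is the whole point of a Fermi estimate):

```python
# Fermi estimate: how many people die in Canada every day?
# Round-number assumptions: population ~38 million, life expectancy ~80 years.
population = 38e6
life_expectancy_years = 80

# Steady-state approximation: roughly 1/80th of the population dies each year.
deaths_per_year = population / life_expectancy_years
deaths_per_day = deaths_per_year / 365

print(round(deaths_per_day))  # about 1,300
```

The true figure is closer to 800 per day – within Fermi’s factor of 2 or 3, with no differential equations required.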

There is an article in the CBC this morning What national COVID-19 modelling can tell us — and what it can’t. Unfortunately, the author misses an opportunity to critically question the purpose of modelling and forecasting. The article contains a sub-title: “Uncertainty not a reason for doubt” (Really?!). On the numerical side, the article tells us that forecasts for Alberta predict between 400 and 3,100 Covid-19 deaths by the end of the summer, and that Quebec could see between 1,200 and 9,000 deaths by the end of April. Beyond the silliness of reporting two significant figures with such uncertainty, if that’s what the models are telling us, they don’t offer much because they are no better than a Fermi estimate. You can get these results by counting on your fingers, just like Enrico Fermi.

People want answers, I understand that. People don’t like not knowing things especially when they are frightened. But “models” that offer forecasts that are no better than Fermi estimates aren’t really models. There’s no need to solve differential equations when your model uncertainty exceeds the simple Fermi estimate. That doesn’t mean we shouldn’t work hard at building models, but it means that the Covid-19 prediction models need far better calibration from real world data before they can be useful in helping us understand the reality of future Covid-19 fatalities.

I will leave you with a wonderful story, told at a meeting of the Federal Open Market Committee (the Federal Reserve’s monetary policy committee) in September 2005, which highlights the absurdity that can result from forecasting behind a veil of ignorance:

During World War II, [Nobel laureate, Ken] Arrow was assigned to a team of statisticians to produce long-range weather forecasts. After a time, Arrow and his team determined that their forecasts were not much better than pulling predictions out of a hat. They wrote their superiors, asking to be relieved of the duty. They received the following reply, and I quote, “The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.”

Covid-19: A case fatality rate app for US counties

I made a web app on estimating the case fatality rate (CFR) of Covid-19 across all the US counties. I use a binomial GLMM with nested random effects (state, state:county) using the R package lme4. Every time you reload the app, it fetches the most recent data and re-estimates the model.

The model “shrinks” the simple CFR estimates (dividing deaths by cases) at the county level by “learning” across the other counties within the state and by “learning” across the states themselves. The model pulls in or pushes out estimates that are too large or too small because they come from a county with a small sample size. It’s a bit like trying to estimate the true scoring rate of the NHL teams after watching only the first 10 games of the season. There will be a couple of blow-outs and shut-outs and we need to correct for those atypical results in small samples – but we should probably shrink the Leafs’ ability to score down to zero just to be safe 😉
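
To give a flavour of the mechanics, here is a simplified empirical-Bayes stand-in for what the GLMM does (the counts and the prior are invented; the app itself uses lme4, not this calculation):

```python
# Simplified "shrinkage" illustration: a beta-binomial empirical-Bayes
# stand-in for the GLMM's partial pooling. All counts are invented.
counties = {"A": (2, 20), "B": (50, 1000), "C": (0, 5)}  # (deaths, cases)

# Prior chosen to match a group-level rate of 5% with a prior weight of
# 100 "pseudo-cases" (in the GLMM this weight is estimated, not assumed).
a, b = 5.0, 95.0

for name, (deaths, cases) in counties.items():
    raw = deaths / cases                      # naive CFR: deaths / cases
    shrunk = (deaths + a) / (cases + a + b)   # pulled toward the group rate
    print(f"{name}: raw={raw:.3f} shrunk={shrunk:.3f}")
```

County A’s alarming 10% raw CFR is pulled down to about 5.8%, county C’s 0% is lifted to about 4.8%, and county B, with plenty of data, barely moves from 5%.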

The CFR data has limitations because the people who get tested are biased toward being sick, often very sick. The infection fatality rate (IFR), which is what we all really want to know, requires testing far more people. Recent evidence suggests that the IFR will end up much lower than the current CFR estimates.

The app shows how the naive empirical estimate of the CFR compares to the shrunk estimator from the model. I also provide a forest plot to show the prediction intervals of the model estimates, including the contributions from the random effects. The prediction intervals I report are conservative. I use R’s merTools predictInterval() to include uncertainty from the residual (observation-level) variance, and the uncertainty in the grouping factors by drawing values from the conditional modes of the random effects using the conditional variance-covariance matrix. I partially corrected for the correlation between the fixed and random effects. Prediction interval estimation with mixed models is a thorny subject, and short of a full Bayesian implementation, a full bootstrap of the lme4 model is required for the best estimates of the prediction interval. Unfortunately, bootstrapping my model takes too long for the purposes of my app (and so does the MCMC in a Bayesian implementation!). For details on the use of merTools::predictInterval(), see Prediction Intervals from merMod Objects by Jared Knowles and Carl Frederick.

Hopefully Covid-19 will pass soon. Stay safe.

Estimating the Covid-19 case fatality rate using GLMM

As we are all dealing with self-isolation and social distancing in our fight against Covid-19, I thought that I would apply some quick-and-dirty mixed effects modelling with the Covid-19 case fatality rate (CFR) data.

The Centre for Evidence Based Medicine (CEBM), Nuffield Department of Primary Care Health Sciences, University of Oxford (not far from my old stomping grounds on Keble Road) has put together a website that tracks the Covid-19 CFR around the world. They build an estimator using a fixed-effect inverse-variance weighting scheme, a popular method in meta-analysis, reporting the CFR by region as a percentage along with 95% confidence intervals in a forest plot. The overall estimate is suppressed due to heterogeneity. In their analysis, they drop countries with fewer than four deaths recorded to date.
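
For readers unfamiliar with the method, fixed-effect inverse-variance weighting pools estimates by weighting each one by the inverse of its squared standard error, so precise studies dominate (the numbers below are invented for illustration):

```python
# Fixed-effect inverse-variance weighting, the standard meta-analysis
# pooling scheme. Estimates and standard errors below are invented.
estimates = [0.014, 0.020, 0.011]   # per-region CFR estimates
std_errs  = [0.002, 0.005, 0.003]   # their standard errors

weights = [1 / se**2 for se in std_errs]   # weight = 1 / SE^2
pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled CFR = {pooled:.4f} +/- {pooled_se:.4f}")
```

The pooled estimate lands closest to the most precise study, and the pooled standard error is smaller than any single study’s – which is precisely why heterogeneity between regions makes a single pooled number misleading.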

I would like to take a different approach with this data by using a binomial generalized linear mixed model (GLMM). My model has similar features to CEBM’s approach, but I do not drop any data – regions which have observed cases but no deaths are informative, and I wish to use all the data in the CFR estimation. Like CEBM, I scrape https://www.worldometers.info/coronavirus/ for the Covid-19 case data.

In one of my previous blog posts I discuss the details of GLMM. GLMM is a partial-pooling method for grouped data which avoids the two extreme modelling approaches of pooling all the data together for a single regression or running separate regressions for each group. Mixed modelling shares information between groups, tempering extremes and lifting those groups which have little data. I use the R package lme4, but this work can equally be done in a full Bayesian setup with Markov Chain Monte Carlo. You can see a nice illustration of “shrinkage” in mixed effects models at this site.

In my Covid-19 CFR analysis, the design matrix, X, consists of only an intercept term, the random effects, b, have an entry for each region, and the link, g(), is the logit function. My observations consist of one row per case in each region with 1 indicating the observation of death, otherwise 0. For example, at the time of writing this post, Canada has recorded 7,474 cases with 92 deaths and so my resulting observation table for Canada consists of 7,474 rows with 92 ones and 7,382 zeros. The global dataset expands to nearly 800,000 entries (one for each case). If a country or region has not recorded a death, the table for that region consists of only zeros. The GLMM captures heterogeneity through the random effects and there is no need to remove zero or low count data. The GLMM “shrinks” estimates via partial-pooling.
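
Expanding to one row per case is convenient, but it carries no extra information: the per-case 0/1 table and the aggregated counts give the same likelihood up to a constant. A quick check with the Canadian figures quoted above:

```python
from math import log

# The per-case table (92 ones, 7,382 zeros) and the aggregated counts
# (92 deaths out of 7,474 cases) give the same binomial log-likelihood,
# up to the constant binomial coefficient.
deaths, cases = 92, 7474
p = deaths / cases  # maximum-likelihood CFR for a single region

rows = [1] * deaths + [0] * (cases - deaths)   # one row per case
loglik_rows = sum(log(p) if y == 1 else log(1 - p) for y in rows)
loglik_agg = deaths * log(p) + (cases - deaths) * log(1 - p)

print(abs(loglik_rows - loglik_agg) < 1e-6)  # True
```

In practice lme4 also accepts the aggregated form directly via a two-column response, which is much lighter on memory than an 800,000-row table.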

Below are my results. First, we see that the fixed effect gives a global CFR of 1.4% to 1.7% (95% CI). This CFR is the base rate that every region shares, with its own random effect sitting on top. The random effect has expectation zero, so the base CFR is the value we would use for a new region that we have not seen before and that has no data yet (zero cases). Notice that we will have a non-zero CFR for regions that have yet to observe a death over many cases – the result of partial pooling.
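
To connect the reported interval back to the model: the GLMM estimates the intercept on the log-odds scale, and the inverse logit (the binomial link’s inverse) maps it back to a probability. A quick sketch, using the interval quoted above:

```python
from math import exp, log

def logit(p):
    """Probability -> log-odds."""
    return log(p / (1 - p))

def inv_logit(x):
    """Log-odds -> probability (the inverse of the binomial GLMM's link)."""
    return 1 / (1 + exp(-x))

# The 1.4%-1.7% CFR interval on the probability scale corresponds to an
# intercept interval of roughly (-4.25, -4.06) on the logit scale.
lo, hi = logit(0.014), logit(0.017)
print(round(lo, 2), round(hi, 2))                        # -4.25 -4.06
print(round(inv_logit(lo), 3), round(inv_logit(hi), 3))  # 0.014 0.017
```

This is why confidence intervals from logistic-type models are symmetric on the logit scale but slightly asymmetric once mapped back to probabilities.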

In the second graph we see the effect of “shrinkage” on the data. The central value for the CFR from separate regressions for each region is on the x-axis (labelled empirical) while the predicted value from the GLMM is on the y-axis. Estimates that sit on the 45-degree line share the same value (which we expect for regions with lots of data and hence less “shrinkage”). We see that regions with little data – including the group on the far left – are lifted.

I like the GLMM approach to the Covid-19 CFR data because there is so much heterogeneity between regions. I don’t know the source of that heterogeneity. Demographics is one possible explanation, but I would be interested in learning more about how each country records Covid-19 cases. Different data collection standards can also be a source of large heterogeneity.

I welcome any critiques or questions on this post. I will be making a Shiny app that updates this analysis daily and I will provide my source code.

CFR by region from a binomial GLMM. Covid-19 data taken on March 30, 2020
Regression by region vs partial-pooling. Covid-19 data taken on March 30, 2020

Choosing Charybdis

The West needs immediate plans to restart its economies in the most virus-safe way possible. If we don’t begin restarting our economies soon, the West will have chosen Charybdis over Scylla. It’s no longer hypothetical. In the United States alone, the response is costing nearly $2 trillion per month. To put that in perspective, the annual output of the entire US economy before the Covid-19 pandemic was $22 trillion. The economic contraction that we already face rivals the largest year-over-year falls in production during the Great Depression. We risk not having an economy left to restart. The next phase could be a sovereign debt run across the globe – the bond markets are already beginning to signal trouble.

South Koreans are winning the war against Covid-19 by testing as many people as possible, isolating the infected – including asymptomatic carriers – and employing aggressive triage policies. Let’s learn from each other, and slowly and safely reopen our economies while employing best social distancing practices. If the world can’t get back to some kind of a functioning economy soon, the law of unintended consequences may come into sharp focus. And what can emerge from those unintended consequences truly frightens me. In the 20th century, the most tyrannical ideologies grew out of instability and hardship. No society is immune to those forces.

Covid-19: Between Scylla and Charybdis, only difficult choices. (Alessandro Allori)

We are in completely uncharted territory. Never in history have we tried to shut down our economies for an indefinite period of time. There is no experience to guide us here; no one knows what awaits us beneath the whirlpool. In addition to our quarantine efforts, we also need to start thinking seriously about the statistical value of life-years remaining as the beginning of some kind of cost-benefit analysis.

People are comparing our current situation to WWII. I think that comparison is apt, but in a way that most people don’t intend.

In 1939 (1941 for our American cousins) we went to war against the Axis powers to protect our way of life, our prosperity, and to build a world in which liberty could grow. If we had let Germany succeed in Europe by surrendering at Dunkirk, we would have survived, and with few Allied casualties. There would be no Allied military cemeteries in Normandy today, or elsewhere in France and Europe. British civilians would have been spared the Blitz. But we would have inherited a world with little opportunity, little prosperity, and a hopeless future for our children. Instead, Canada sacrificed the lives of 42,000 young men – all in the prime of life – with another 55,000 wounded. Our young country of 11 million people put 10% of its citizens directly in harm’s way so that you and I could enjoy a world full of potential, growth, freedom, and peace. The Allies together lost millions. We marched straight forward with resolve and determination and we refused to be swallowed. In the coming weeks, even while employing our best containment efforts, the West may once again be put in the most awful of positions: we may need to ask the literal sons and daughters of the generation that ensured our freedom 80 years ago for a similar sacrifice – this time by accepting only a slightly higher level of risk and standing in harm’s way to protect us from what lies beneath.

In 1939, we chose Scylla and we won.

Covid-19: Between Scylla and Charybdis. A word of caution from Professor John Ioannidis

Like Odysseus, the Western world finds itself caught between Scylla and Charybdis. We have embarked on a policy path to combat the Covid-19 pandemic that has no precedent in our collective history. The eurozone is looking at a 24% economic contraction in the second quarter on an annualized basis. With numbers that large, I can’t help but think that all kinds of geopolitical risks lurk around the corner. (In the lead-up to WWI, nearly all intellectuals and leaders of the European powers believed any conflict would last a mere matter of weeks, or at most a few months. They badly miscalculated.)

Odysseus facing the choice between Scylla and Charybdis, Henry Fuseli.

In Italy, limited capacity is forcing physicians and medical staff into difficult moral choices. We may reach another moral choice in the very near future – placing a hard upper bound on the “value of a statistical life”, corrected for remaining years of life expectancy. How much are we willing to throttle our economy to save some lives with policies that will eventually cost other lives down the road? There are no easy answers here, only trade-offs.

But before we can do any trade-off analysis, we need good data. John Ioannidis, professor of Medicine, of Health Research and Policy and of Biomedical Data Science, at Stanford University School of Medicine, and a professor of Statistics at Stanford University School of Humanities and Sciences, has a new article in STAT: A fiasco in the making? As the coronavirus pandemic takes hold, we are making decisions without reliable data. Professor Ioannidis is an expert in statistics, data science, and meta-analysis (combining data and results from multiple studies on the same research question). He is also the author of the celebrated paper “Why Most Published Research Findings Are False” in PLOS Medicine. In A fiasco in the making?, professor Ioannidis asks,

“Draconian countermeasures have been adopted in many countries. If the pandemic dissipates — either on its own or because of these measures — short-term extreme social distancing and lockdowns may be bearable. How long, though, should measures like these be continued if the pandemic churns across the globe unabated? How can policymakers tell if they are doing more good than harm?”

He also points out that we truly don’t understand the current infection level,

“…we lack reliable evidence on how many people have been infected with SARS-CoV-2 (Covid-19) or who continue to become infected. Better information is needed to guide decisions and actions of monumental significance and to monitor their impact…The data collected so far on how many people are infected and how the epidemic is evolving are utterly unreliable. Given the limited testing to date, some deaths and probably the vast majority of infections due to SARS-CoV-2 are being missed. We don’t know if we are failing to capture infections by a factor of three or 300…The most valuable piece of information for answering those questions would be to know the current prevalence of the infection in a random sample of a population and to repeat this exercise at regular time intervals to estimate the incidence of new infections. Sadly, that’s information we don’t have.”

In the article, he details the analysis of the natural experiment offered by the quarantined passengers on the Diamond Princess cruise ship and what it could mean for bounding the case fatality ratio of SARS-CoV-2. He ends the article on a cautionary note about the importance of weighing consequences against expected results:

“…with lockdowns of months, if not years, life largely stops, short-term and long-term consequences are entirely unknown, and billions, not just millions, of lives may be eventually at stake. If we decide to jump off the cliff, we need some data to inform us about the rationale of such an action and the chances of landing somewhere safe.”

I encourage you to read professor Ioannidis’ article. We are stuck between Scylla and Charybdis, but we can make better decisions with better data. Our choices over the next couple of weeks may incalculably change the course of human history forever.

UPDATE March 20, 2020

A commenter, Gittins Index (thanks!), has found a freely accessible copy of W. Kip Viscusi’s classic paper on the value of statistical life: “The Value of Risks to Life and Health”, Journal of Economic Literature, Vol. XXXI (December 1993), pp. 1912–1946.