Enrico Fermi, the great Italian-American physicist who contributed immensely to our understanding of nuclear processes and particle physics, was known for saying that any good physicist who knows anything about the scale of a problem should be able to estimate any result to within a half order of magnitude or better without doing a calculation. You only need to solve difficult equations when you want to do better than a factor of 2 or 3.
When I taught at Carleton University, I used to teach my students how to make Fermi estimates. I would ask them to estimate (without using Google!) the number of police officers in Ottawa, the number of marriages that took place in Ontario last summer, or the number of people who die in Canada every day. Fermi estimation isn’t magical; it’s just focused numeracy.
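To make the exercise concrete, here is a minimal sketch of the first estimate in Python. Every input is a round, order-of-magnitude guess, not a looked-up figure; that is the whole point of the method.

```python
# Fermi estimate: how many police officers are there in Ottawa?
# All inputs below are rough guesses, deliberately rounded.

population = 1_000_000       # Ottawa is roughly a million people
officers_per_1000 = 2        # big-city policing runs a few officers per thousand residents

estimate = population / 1000 * officers_per_1000
print(f"Roughly {estimate:,.0f} officers, give or take a factor of 2")
```

Two guessed inputs, one multiplication, and you land within a factor of 2 or 3 of the real number. No differential equations required.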
There is an article in the CBC this morning, “What national COVID-19 modelling can tell us — and what it can’t.” Unfortunately, the author misses an opportunity to critically question the purpose of modelling and forecasting. The article carries the subtitle “Uncertainty not a reason for doubt” (Really?!). On the numerical side, the article tells us that forecasts for Alberta predict between 400 and 3,100 Covid-19 deaths by the end of the summer, and that Quebec could see between 1,200 and 9,000 deaths by the end of April. Beyond the silliness of reporting two significant figures with such uncertainty, if that’s what the models are telling us, they don’t offer much, because they are no better than a Fermi estimate. You can get these results by counting on your fingers, just like Enrico Fermi.
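It is worth pausing on just how wide those quoted ranges are. A quick calculation on the numbers reported in the article shows that both forecast intervals span nearly a full order of magnitude:

```python
# Spread of the forecast ranges quoted in the CBC article:
# upper bound divided by lower bound for each province.

ranges = {
    "Alberta": (400, 3_100),   # deaths by end of summer
    "Quebec": (1_200, 9_000),  # deaths by end of April
}

for province, (low, high) in ranges.items():
    print(f"{province}: upper/lower = {high / low:.1f}x")
```

A factor of 7.5 to 8 between the low and high ends is exactly the kind of spread a back-of-the-envelope Fermi estimate gives you for free.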
People want answers, I understand that. People don’t like not knowing things, especially when they are frightened. But “models” that offer forecasts no better than Fermi estimates aren’t really models. There’s no need to solve differential equations when your model uncertainty exceeds that of a simple Fermi estimate. That doesn’t mean we shouldn’t work hard at building models, but it does mean that the Covid-19 prediction models need far better calibration against real-world data before they can help us understand the reality of future Covid-19 fatalities.
I will leave you with a wonderful story, told at the Federal Open Market Committee (a meeting at the Federal Reserve) in September 2005, which highlights the absurdity that can result from forecasting behind a veil of ignorance:
During World War II, [Nobel laureate, Ken] Arrow was assigned to a team of statisticians to produce long-range weather forecasts. After a time, Arrow and his team determined that their forecasts were not much better than pulling predictions out of a hat. They wrote their superiors, asking to be relieved of the duty. They received the following reply, and I quote, “The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.”