A temporary field hospital for Covid-19 patients in Khayelitsha is shown in this July 21 2020 file photo. Picture: REUTERS/MIKE HITCHINGS

Graham Barr reminded us in a recent article in this publication of some of the challenges we have encountered while estimating the possible number of SA Covid-19 cases and associated healthcare resource use as part of the SA Covid-19 Modelling Consortium (“Government shunned proper statistical tools to tackle pandemic,” August 23). Among these are the choice of modelling methods, the risk of understating the uncertainty surrounding one's estimates, and the difficulty of assessing the potential effect of regulations.

It is unclear who exactly, or what model, the article was aimed at. One focus was the question of whether the government decision regarding the alcohol ban was based on correct analysis. We are not in a position to comment on this as we had no part in this analysis. But there are a number of other contentions in the article that seem to take aim at what was in fact our work.

Barr states that “the government focused on models that gave uncertain and alarmist projections of loss of life”. Our projections of the potential number of cumulative deaths have always been accompanied by ranges representing the uncertainty associated with them; whether the reporting on them was alarmist or not is outside of our control. He also states that the models “attached too little weight to the certain loss of jobs, tax revenue and government services”.

Our model has never attempted to attach any weight to these aspects, though others' models have done so. We steered clear of incorporating them not because we do not believe they are important — quite the opposite — but because they lie outside the scope of work the consortium was formed to address. However, from the beginning we worked with other groups that tackled the macroeconomic effects of both the pandemic and potential measures for its control, sharing model outputs and updates.

Barr also contends that “statisticians must always give model estimates that include the uncertainty of their estimates. If the uncertainty associated with model estimates is not made explicit, the government and the public may be given the impression that the model estimates have a weight of scientific knowledge behind them that they do not have.” We could not agree more.

In all our outputs, be they part of public reports, results shared with government or media engagements, we have made it clear that our projections are subject to great uncertainty, largely owing to, as Barr rightly states, the global lack of historical data regarding the course of the disease, in particular in the early stages of the pandemic, as well as the incompleteness of local data regarding the number of cases, hospital admissions and deaths. We have updated our projections as reliable new data and evidence became available. Lastly, as stated, we have presented all our results together with their respective uncertainty ranges, to make it graphically clear what the limits of our knowledge are.

Barr finally asserts that “many well-formulated tools exist for tackling complex problems […] that include all role players.” He mentions the somewhat overlapping fields of operational research, cost-benefit analysis and multi-criteria decision analysis. Operational research is a scientific approach to solving problems that arise in complex systems, using a range of both quantitative and qualitative methods, including simulation, optimisation, decision analysis and problem structuring.

Both epidemiological modelling (our work) and multi-criteria decision analysis (MCDA) are well-established tools within operational research. Regarding cost-benefit analysis, a method of economic evaluation in which the cost of an activity is compared with the monetary value of its consequences, we agree that this is theoretically a fine method to use in guiding government decisions — or would be, if data on all costs and all consequences, or at least most of them, was indeed available.

Some of us are veterans of applying cost-benefit analyses to assess healthcare interventions. From this we know that only very rarely, and only in very circumscribed situations, is enough data available on all relevant costs and consequences to do the method justice; in all other situations, the number of assumptions that have to be made on the basis of very scant data renders the result useless.

In other words, what is the meaning of a finding that the cost of the lockdown outweighs that of an uncontrolled pandemic by a factor of x if the effect of the lockdown on the economy is unclear; the duration of this effect has to be fully assumed; and the effect of an uncontrolled epidemic on the economy is also unknown?

With regard to multi-criteria decision analysis, this again is a good tool for informing policy decisions (and one that one of us has successfully used in other situations, mostly disease areas where data had been collected over decades). Appropriate use of MCDA requires giving stakeholders ample opportunity to form a sufficiently informed view to be able first to define and weight different decision criteria, and then to score how well a number of alternative actions would perform against them.

Given the lack of data both with regard to the course of the epidemic and to the effect of most available interventions against it, this method could possibly have served as a tool to facilitate and integrate learning and understanding of the problem being faced, rather than suggesting objective decisions to pursue. Optimal decisions are nevertheless not the goal of MCDA, which seeks rather to integrate objective estimation with subjective value measurement.  
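For readers unfamiliar with the mechanics, the core scoring step of MCDA described above can be sketched in a few lines of code. The criteria, weights, interventions and scores below are purely hypothetical placeholders chosen for illustration, not figures from any actual analysis, and a real MCDA exercise would derive them through the stakeholder process described above:

```python
def mcda_rank(weights, scores):
    """Rank alternatives by the weighted sum of stakeholder scores.

    weights: dict mapping criterion -> weight (normalised here to sum to 1)
    scores:  dict mapping alternative -> dict of criterion -> score (0-100)
    Returns a list of (alternative, total score) pairs, best first.
    """
    total = sum(weights.values())
    norm = {c: w / total for c, w in weights.items()}
    totals = {
        alt: sum(norm[c] * s for c, s in crit_scores.items())
        for alt, crit_scores in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical example: two interventions scored against three criteria.
weights = {"lives_saved": 5, "economic_cost": 3, "feasibility": 2}
scores = {
    "strict_lockdown": {"lives_saved": 80, "economic_cost": 20, "feasibility": 60},
    "targeted_measures": {"lives_saved": 60, "economic_cost": 70, "feasibility": 80},
}
ranking = mcda_rank(weights, scores)
```

The point of the exercise is less the final ranking than the structured conversation it forces: stakeholders must make their criteria, weights and scores explicit, which is exactly where the learning and integration described above happens.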

Covid-19 is only eight months old, and much about this new infectious disease remains unknown. Epidemiological models have therefore been developed using data that is subject to a high degree of uncertainty, as have the other models developed to assess the economic, financial, social and clinical aspects of the epidemic. All models are simplifications of reality, designed to describe and predict system behaviour, and are justified by the assumptions and data with which they are developed.

And this data is scarce. Barr says “ ... scientific modelling and forecasting of Covid-related phenomena are challenging at best, and even with good data there is considerable uncertainty associated with such contentions.” Epidemiological modelling is not immune to this, and neither are cost-benefit analysis and MCDA. There is no one scientific or other method that is complete in its assessment of Covid-19 and its effect on the whole system.

At times like this, multidisciplinary research is necessary and there are many groups comprising epidemiologists, economists and mathematical disease modellers such as ourselves, and financial, clinical and social experts working towards supporting the Covid-19 response in SA. Barr suggests tools he feels can complement those in use, and we encourage him to lend his support and expertise.

For our part, our epidemiological models have been refined regularly and will continue to be refined as new global research and local data emerges.

• Silal is director of the Modelling and Simulation Hub, Africa at UCT; Meyer-Rath is an associate professor at Boston University’s School of Public Health and works at the Health Economics and Epidemiological Research Office at Wits; Pulliam is director of the DSI-NRF SA Centre for Epidemiological Modelling and Analysis at the University of Stellenbosch; and Moultrie is a medical doctor and epidemiologist at the NICD’s Centre for Tuberculosis.   
