
A major effort is under way to put a resolution before the next World Health Organisation (WHO) assembly, scheduled for May 27-June 1 2024, to advance a strategic dialogue about establishing globally applicable norms, standards and protocols for biosafety, biosecurity and biosurveillance in the age of genetic engineering.

This week in Cape Town the African Society for Laboratory Medicine (ASLM) is holding its annual conference to discuss the challenge of advancing scientific work on disease-causing pathogens while ensuring this is done safely and securely in properly equipped laboratories. There is considerable momentum in the post-Covid world to put guardrails in place that keep genetic engineering in its lane without stifling the great benefits that scientific applications bring to health.

Beneficial genetic engineering has been a long time coming. A grain of golden rice, one of the most famous examples of genetic modification, can provide as much as 23 times the beta carotene (a precursor to vitamin A) of a standard rice variety. Insufficient intake of beta carotene suppresses the immune system, a potentially deadly deficiency that remains endemic in developing countries.

While nascent technologies such as genetic modification, artificial intelligence (AI) and gene editing have the potential to accelerate the treatment and prevention of disease, they could likewise be harnessed to generate easily transmissible and potentially deadly novel pathogens. Several country-level analyses of preparedness for disease outbreaks, such as the WHO’s joint external evaluations and the Global Health Security Index, have demonstrated a global deficit of preparedness for future disease outbreaks, with weaknesses most apparent in developing countries.

Strengthening disease surveillance and response capabilities in individual countries is more than a matter of national interest. It is an essential step towards strengthening global health security in the face of any disease outbreak whether accidental, deliberate or natural in origin. 

As we have witnessed in recent years, the emergence of novel diseases poses a significant threat to global health stability. The Covid-19 pandemic is a stark reminder of the importance of fortifying health systems worldwide. Developing nations, often burdened by limited resources and inadequate healthcare infrastructure, are particularly vulnerable to the potentially catastrophic consequences of such events. While much attention has been focused on larger nations, giving equitable attention to developing regions is crucial for building a resilient global health architecture.

The same technologies that hold promise for medical breakthroughs can also be misused for deliberately harmful or political purposes. This dual-use potential of biotechnology poses a challenge in balancing scientific progress with security concerns. As genome mapping and sequencing become more accessible and widely shared, so too does the potential for misuse grow.

The notion of genetically engineering a deadly virus in a laboratory, often associated with concerns about bioterrorism or accidental release, raises complex ethical, scientific and diplomatic considerations. With rapid advancements in biotechnology, especially in mRNA capabilities and gene editing techniques like Crispr, scientists have gained unprecedented precision and ease in manipulating genetic material — including modifying the genetic code of pathogens such as bacteria and viruses.

This is no longer theoretical: scientists have already harnessed genetic engineering techniques to induce mutations in viruses that can make them more or less infectious, an approach referred to as “gain of function” research.

The possibility of using AI to develop bioweapons raises additional concerns, and remains uncharted territory. While the intersection of AI and biotechnology holds immense potential for positive applications in healthcare, research and diagnostics, it also poses risks if misused. AI algorithms could be employed to analyse vast genetic data sets and identify specific sequences for manipulation. This could accelerate the process of genetic engineering, allowing for the creation of more efficient and potentially harmful pathogens.

Advanced AI systems could also theoretically be used to design pathogens with specific characteristics, such as increased virulence or resistance to existing treatments. This targeted design could make bioweapons more lethal and difficult to counteract.

To safeguard against such threats, multilateral and public-private agreements and regulations governing the ethical use of AI in science should be established, with an explicit prohibition on bioweapon development. Strong oversight committees should be responsible for assessing the ethical implications at the intersection of AI and biotechnology, and should include experts in AI, virology, bioethics and global health security.

A culture of transparency and open research must be fostered, allowing the international scientific community and regulatory bodies to monitor and assess potentially risky developments. Researchers should be encouraged to publish their findings while considering security concerns. A collaborative approach can enhance the collective ability to identify and mitigate potential risks.

In addition to oversight at the level of research and development, adherence to international norms plays a crucial role in preparing for and preventing accidental or deliberate disease outbreaks. Governments and international governing bodies must implement strict regulations and oversight for research involving potentially dangerous pathogens. At the same time, they must work to improve the safety and security of laboratories conducting research with high-risk pathogens.

The Biological Weapons Convention (BWC) serves as an international standard preventing the proliferation and deployment of harmful biological agents. However, it does not include clear guidelines on the use of AI in the development of bioweapons. The Global Partnership Against the Spread of Weapons and Materials of Mass Destruction, an international initiative formed after 9/11 and led by the Group of Seven (G7) to prevent the proliferation of chemical, biological, radiological and nuclear weapons, has bolstered BWC compliance efforts.

With the Africa Centres for Disease Control and Prevention (Africa CDC), the Global Partnership also formed the Signature Initiative to Mitigate Biological Risks in Africa, which covers hazards associated with new biotechnologies. A priority is to develop and maintain sustainable biosecurity, biosafety and laboratory capacity in Africa, including through:

  • Establishing regional specialised training facilities for engineering, maintenance and calibration of laboratory equipment and biological laboratories;
  • Applying innovative new solutions to improve the sustainability of infectious disease diagnostic laboratories, particularly in low-resource settings;
  • Supporting the Africa CDC to maintain a strategic focus on biosafety and biosecurity and appropriate capacity-building in AU member states, in alignment with its five-year strategic plan for biosafety and biosecurity; and
  • Supporting regional professional training and certification programmes, in response to workforce development and surge capacity needs.

As has been seen with the Covid pandemic, infectious pathogens do not respect borders, and local outbreaks in any country have the potential to spread internationally in a matter of days. The rapid advancement of biotechnology and AI has significantly changed the health security landscape, making the accidental or deliberate release of engineered pathogens a concerning possibility.

The international health security network is only as strong as its weakest point. Developing countries are particularly vulnerable to novel disease outbreaks of any origin, presenting an opportunity for targeted investment in health security infrastructure. By enhancing developing countries’ disease surveillance and diagnostic capabilities and strengthening their healthcare delivery systems, we can create a robust line of defence against potential outbreaks.

• Dr James is a professor of practice at the School of Public Health, and senior adviser to the Pandemic Centre, at Brown University, Rhode Island. 

