Gain-of-function research: pandemic risk or scientific progress?
Gain-of-function research - good or not?
The Wuhan lab-leak hypothesis triggered a broader discussion about virology research. One of the main topics is the risk-benefit analysis of gain-of-function (GOF) research. The Wikipedia page gives some details, but for a more complete picture there's a paper published in Nature in 2014. It's important that this was 5 years prior to COVID. Five researchers share their views on the topic; what you're about to read is a summary of the points raised in that paper, condensed from over 6,000 words, so roughly four-fold.
The format is a question (5 in total), followed by answers from each of the researchers. Each section ends with the initials of the researcher(s). Text marked with '>' indicates a direct quote; other text is my summary. My comments are preceded by the word 'Comment'.
Tl;dr:
The question is not whether to do any gain of function research, but whether to include risky pandemic-potential viral elements.
The list of researchers, along with links to their Twitter accounts:
W. Paul Duprex - https://twitter.com/10queues
Ron A. M. Fouchier - twitter account not found
Michael J. Imperiale - https://twitter.com/mimperia
Marc Lipsitch - https://twitter.com/mlipsitch
David A. Relman - https://twitter.com/DavidRelman
Why do we do Gain-of-Function research at all?
GOF research is not just for avian viruses.
> Phenotypes include resistance to a drug, alteration of host range, enhanced stability and replication, and not only transmission.
> mechanistic studies [...] facilitate the study of host–pathogen interactions. Virologists will be deprived of a powerful tool of human inquiry if they are unable to perform adaptation experiments. Second, it is critical to realize that the benefits of basic research are often unanticipated and accrue over time.
> identification of mutations that increase virus replication (which is applicable to vaccine production) and changes that enhance stability of receptor-binding proteins (which is useful for surveillance).
Additionally, inserting new genes to create a new function proves causality more rigorously than deleting genes to produce a loss of function.
>There are also ways to build in safety features, such as the incorporation of a microRNA target sequence into the influenza virus genome that results in inhibition of replication outside the laboratory setting3.
Potential Pandemic Pathogens (PPPs) are the most dangerous subset. The question is not whether to do any gain-of-function research, but whether to include these risky pandemic-potential viral elements.
There is some disagreement about how useful PPP research is:
> expensive, often underpowered, low-throughput and often poorly generalizable, and which create pandemic risk — is better than using those resources to enhance the rest of the portfolio for flu preparedness
> In general, it is unnecessary and inappropriate to create new infectious agents that are capable of causing widespread harm. Genetic and biological contexts are important. As an example, genetic engineering that is intended and likely to endow a low-pathogenicity, low-transmissibility agent with either enhanced pathogenicity or enhanced transmissibility may be appropriate if the benefits are substantial. Conversely, creating a highly pathogenic, highly transmissible organism that does not already exist in nature is unnecessarily risky and potentially irresponsible.
The section above was written as a joint effort by all the researchers.
Do you think a pause in funding is necessary for biosecurity reasons?
There is an element of self-interest in protection and biosafety. Laboratories and procedures are designed as robust systems with many layers and no single point of failure, and personnel are vaccinated.
Comment: in the following you will see more cases of lab leaks than you might expect.
> There is no evidence that current biocontainment measures are insufficient; major laboratory-derived human outbreaks have not occurred during more than a century of scientific research on dangerous pathogens, even at times when biosafety measures were largely non-existent.
> Historical evidence has shown that even when there were human transmission events after laboratory accidents (such as the cases of severe acute respiratory syndrome (SARS) in Beijing, China), human cases were limited. Some people have argued that the 1977 Russian influenza epidemic was the result of a laboratory accident, but in 1977 influenza research was done on the bench (under conditions of limited biocontainment), and attenuated and wild-type strains were tested in humans; we do not know what happened in 1977, but we cannot conclude that the virus escaped a laboratory that met biosafety standards.
> Laboratory accidents happen, even in high containment settings. The recent events at the CDC in the United States, in which a strain of highly pathogenic avian influenza was accidentally shipped to another laboratory and in which a pathogen was taken out of a laboratory without proper inactivation, are just two examples.
There is an opportunity cost to not doing this kind of research.
> the possibility that additional rules and regulations might end up slowing down the exact research that we require to protect ourselves from these pathogens is a real concern.
ML
> More than twice a week in US laboratories, there is a 'possible release event' or a 'possible loss event', even if we look only at select agents — some of the most dangerous pathogens. For every 1,000 lab-years of work in BSL-3 laboratories in the United States with select agents, there are at least 2 accidental infections
Comment: The source for these claims is here.
> This level of safety may be acceptable if the risk is to the laboratory workers only, as it is with most pathogens that are not readily transmissible. However, the same probability of an accident that could spark a global pandemic cannot be called acceptably safe.
DAR
> From a biosafety perspective, I believe that some of the work performed so far with highly pathogenic influenza viruses that have enhanced transmissibility in mammals has not been conducted at a sufficiently high enough biosafety level. From a biosecurity perspective, the unfettered dissemination of the complete genome sequence of a new, highly transmissible, highly pathogenic agent enables anyone skilled in the art to produce the agent de novo if a reverse genetics system for that class of agent is available, which could occur at locations that lack even basic biocontainment measures
W.P.D.
R.A.M.F.
M.J.I.
Should there be any changes to the procedures for reporting studies and other channels of information circulation? A question by WPD
Strict confidentiality is not realistic among researchers, given the natural flow of conversation in the scientific community.
RAMF
Academic research is already in the public domain: it's publicly funded, conducted by international exchange personnel, and focused on transparency. It's on publishers to publish responsibly, but they sit at the end of the chain; scientists have alternative channels, so blocking only that one path would be ineffective.
MJI
Now, compared with 10-15 years ago, we know there are malicious individuals and groups, and the cost of attempting something on a large scale is decreasing. More and more of the data being released might be dangerous.
Authors are best placed to gauge the risk, as they are familiar with its complexity.
> this was the opinion of the US National Science Advisory Board for Biosecurity (NSABB) before the submission of the manuscripts describing GOF experiments that resulted in increased transmission of H5N1 influenza virus in mammals.
There is pressure to publish, and risks might not be evident. As for installing a filter at the journal stage: are journal reviewers knowledgeable enough?
ML
For PPPs, the accident risk alone is enough to outweigh the benefits.
DAR
Risk calculation should happen before the actual research, but there is always something unexpected, so that alone would not be enough. In a global society everyone shares the risk and should receive some of the benefit. The first duty is to do no harm.
Limiting the dissemination of data is a routine way to minimize high risks; total restrictions should be only temporary, lifted as soon as the risks have been mitigated. We need more standardized procedures for identifying dangerous information.
WPD
Has the debate been beneficial or too fear-mongering? Question by WPD
The debate was based on imprecise definitions, embedded in rhetorical language. The process did not involve peer review, there was poor communication, and people were inhabiting different worlds and media bubbles. Media framing this as a fight doesn't help.
RAMF
Not a debate, just an exchange of tweets.
MJI
The results are both harmful and beneficial: people draw conclusions without learning all the facts. A bad thing is that much of the discussion happened in print, rather than with real people in a room.
ML
> The public debate is long overdue and necessary.
DAR
Debate was beneficial, even if flawed - too emotional, too low diversity of stakeholders - only disease researchers, without vetting for possible conflicts of interest.
WPD
How should the debate proceed? Is agreement possible? Question by WPD.
Actually, there's an organization for this: Scientists for Science. The question concerns not just virologists; many microbiologists perform evolution and adaptation studies. Risk-benefit analysis is better than a ban, and consensus is possible.
RAMF
Discussion is very good and much needed, but full consensus is not realistic. That is because the risks and benefits are not quantifiable, so ultimately it is a judgement call. There is a certain asymmetry between the positions of stakeholders (intelligence risks versus scientific benefits) with little alignment of incentives.
Therefore we should start with simple questions: estimate the relative likelihood of emergence from nature versus from a lab, the risks and benefits of the research, and the efficiency of controls.
MJI
The 2011 discussion wasn't complete: it didn't include all perspectives (scientific, biosafety, and ethics).
So it's a good development that we are debating, and we must come to an agreement, as the stakes are high. Extremists might not compromise, but the majority of stakeholders can.
ML
There might never be complete agreement. We can't justify dangerous work when safer alternatives exist, but the details depend on the species of virus.
DAR
We can definitely agree on the essentials. National academies of science need to cooperate with governments and international bodies, and with non-scientist thought leaders as well, practicing deliberative democracy. We need a balanced governance scheme for the life sciences: norms, plus a relationship with the government and the general public.
WPD