In the early morning of the last day of August, Parisians experienced for the first time a practice normally confined to tropical regions — authorities fumigating the city against the tiger mosquito. The event was a tangible confirmation of what public health statistics already showed: dengue, the deadly mosquito-borne disease, had well and truly arrived in Europe.
In 2022, Europe saw more cases of locally acquired dengue than in the whole of the previous decade. The rise marks both a public health threat and a corresponding market opportunity for dengue vaccines and treatments, news that should spur the pharma industry to boost investment in the neglected disease.
On the face of it, this shift would appear to benefit not only countries like France but also nations like Bangladesh and the Philippines that have long battled dengue.
But that assumption could be fatally flawed, experts told POLITICO.
People working in the field say the rise of dengue in the West could, in fact, make it harder to get lifesaving drugs to those who need them most, either because pharma companies develop tools that are less effective in countries where the dengue burden is highest, or because wealthy nations end up hoarding these medicines and vaccines.
“It might look like a good thing — and it is a good thing — that we’re getting more products developed, but does it then create a two-tier system where high-income populations get access to it and then we still have the access gap for low- and middle-income countries?” asked Lindsay Keir, director of the science and policy advisory team at think tank Policy Cures Research.
Killer invading mosquitoes
Climate change and migration mean the mosquitoes that transmit dengue, as well as other diseases such as chikungunya and Zika, are setting up shop in Europe. The most recent annual data from the European Centre for Disease Prevention and Control shows that, in 2022, Europe saw 71 cases of locally acquired dengue: 65 in France and six in Spain.
While dengue usually results in mild or no symptoms, it can also lead to high fever, severe headache and vomiting. Severe dengue can cause bleeding from the gums, abdominal pain and, in some cases, death.
So far, the mosquito has mostly been confined to Southern Europe, but it’s a worry across the Continent. In Belgium, the national public health research institute Sciensano has even launched an app where members of the public can submit photos of any Asian tiger mosquitoes they spot.
The diseases spread by these mosquitoes have traditionally fallen under the umbrella of neglected tropical diseases, a group of infections that affect mainly low-income countries and struggle to attract research and development investment. But this is changing.
Policy Cures Research, which publishes an annual report on R&D investment into neglected diseases, removed dengue vaccines from its assessment in 2013. Dengue vaccines were no longer seen as a case of market failure, because a market had emerged that the private sector could tap into.
The organization still tracks dengue drugs and biologics, and its 2022 analysis showed a 33 percent increase in funding for research into non-vaccine products compared with the previous year, with industry investment reaching a record high of $28 million.
Sibilia Quilici, executive director of the vaccine maker lobby group Vaccines Europe, said the most recent pipeline review of members found that roughly 10 percent were targeting neglected diseases. There is more R&D happening in this area, said Quilici.
Across the major drugmakers, J&J is working on a dengue antiviral treatment and MSD has a dengue vaccine in its pipeline, while Sanofi has a second yellow fever jab in development. Two dengue vaccines are already approved in the EU — one from Sanofi and another from Takeda. Moderna recently told POLITICO that it is looking closely at a dengue vaccine candidate and already has a Zika candidate in the works.
For the few, not the many
But just because there might soon be larger markets for Big Pharma doesn’t mean the products will be suitable for the populations that have been waiting years for these tools.
Rachael Crockett, senior policy advocacy manager at the non-profit Drugs for Neglected Diseases initiative (DNDi), said increased pharma investment in a particular disease won’t necessarily lead to products that are globally relevant. “Industry will — and governments are also more likely to — focus on prevention,” she said.
That means tools such as vaccines will be prioritized. But in countries where dengue is endemic, the rainy season completely overburdens health systems, and what they desperately need are treatments, said Crockett.
She also said a massive increase in investment without a structure to ensure access to resulting products means “we have absolutely no guarantee that there isn’t going to be hoarding, [that] there isn’t going to be high prices.” Case in point: The U.S. national stockpile of Ebola vaccines, which exists despite there never having been an Ebola outbreak in the country.
Underlying many of these fears are the mistakes of the COVID-19 pandemic, which saw countries with less cash and political heft at the back of the queue when it came to vaccines.
Lisa Goerlitz, head of the Brussels office of the German charity Deutsche Stiftung Weltbevölkerung (DSW), warned that if drug development picks up because of a growing market in high-income countries, then accessibility, affordability and other criteria that make products suitable for low-resource settings might not be prioritized.
Vaccines Europe’s Quilici sought to allay these concerns, pointing to the pharma industry’s Berlin Declaration, a proposal to reserve an allocation of real-time vaccine production during a health crisis. Quilici said this was a “really strong commitment … which comes right from the lessons learnt from COVID-19 and which could definitely overcome the challenges we had during the pandemic, if it is taken seriously.”
LONDON — Back in the spring, Britain was sounding pretty relaxed about the rise of AI. Then something changed.
The country’s artificial intelligence white paper — unveiled in March — dealt with the “existential risks” of the fledgling tech in just four words: high impact, low probability.
Less than six months later, Prime Minister Rishi Sunak seems newly troubled by runaway AI. He has announced an international AI Safety Summit, referred to “existential risk” in speeches, and set up an AI safety taskforce with big global aspirations.
Helping to drive this shift in focus is a chorus of AI Cassandras associated with a controversial ideology popular in Silicon Valley.
Known as “Effective Altruism,” the movement was conceived in the ancient colleges of Oxford University, bankrolled by the Silicon Valley elite, and is increasingly influential on the U.K.’s positioning on AI.
Not everyone’s convinced it’s the right approach, however, and there’s mounting concern Britain runs the risk of regulatory capture.
The race to ‘God-like AI’
Effective altruists claim that super-intelligent AI could one day destroy humanity, and advocate policy that’s focused on the distant future rather than the here-and-now. Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.
“The view is that the outcome of artificial super-intelligence will be binary,” says Émile P. Torres, a philosopher and former EA turned critic of the movement. “That if it’s not utopia, it’s annihilation.”
In the U.K., key government advisers sympathetic to the movement’s concerns, combined with Sunak’s close contact with leaders of the AI labs – which have longstanding ties to the movement – have helped push “existential risk” right up the U.K.’s policy agenda.
When ChatGPT-mania reached its zenith in April, tech investor Ian Hogarth penned a viral Financial Times article warning that the race to “God-like AI” “could usher in the obsolescence or destruction of the human race” – urging policymakers and AI developers to pump the brakes.
It echoed the influential “AI pause” letter calling for a moratorium on “giant AI experiments,” and, in combination with a later letter saying AI posed an extinction risk, helped fuel a frenzied media cycle that prompted Sunak to issue a statement claiming he was “looking very carefully” at this class of risks.
“These kinds of arguments around existential risk or the idea that AI would develop super-intelligence, that was very much on the fringes of credible discussion,” says Mhairi Aitken, an AI ethics researcher at the Alan Turing Institute. “That’s really dramatically shifted in the last six months.”
The EA community credited Hogarth’s FT article with telegraphing these ideas to a mainstream audience, and hailed his appointment as chair of the U.K.’s Foundation Model Taskforce as a significant moment.
Under Hogarth, who has previously invested in the AI labs Anthropic, Faculty and Helsing, and the AI safety firm Conjecture, the taskforce announced a new set of partners last week – a number of which have ties to EA.
Three of the four partner organizations in the lineup are bankrolled by EA donors. The Center for AI Safety is the organization behind the “AI extinction risk” letter (the “AI pause” letter was penned by another EA-linked organization, the Future of Life Institute). Its primary funding – to the tune of $5.2 million – comes from the major EA donor organization Open Philanthropy.
Another partner is ARC Evals, which “works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilization.”
It’s a project of the Alignment Research Center, an organization that has received $1.5 million from Open Philanthropy, $1.25 million from high-profile EA Sam Bankman-Fried’s FTX Foundation (money it promised to return after the implosion of his crypto empire), and $3.25 million from the Survival and Flourishing Fund, set up by Skype founder and prominent EA Jaan Tallinn. ARC Evals is advised by Open Philanthropy CEO Holden Karnofsky.
Finally, the Collective Intelligence Project, a body working on new governance models for transformative technology, began life with an FTX regrant, and a co-founder appealed to the EA community for funding and expertise this year.
Joining the taskforce as one of two researchers is Cambridge professor David Krueger, who has received a $1 million grant from Open Philanthropy to further his work to “reduce the risk of human extinction resulting from out-of-control AI systems.” He describes himself as “EA-adjacent.” One of the PhD students Krueger advises, Nitarshan Rajkumar, has been working with the British government’s Department for Science, Innovation and Technology (DSIT) as an AI policy adviser since April.
A range of national security figures and renowned computer scientist Yoshua Bengio are also joining the taskforce as advisers.
Combined with its rebranding as a “Frontier AI Taskforce,” which projects its gaze into the future of AI development, the announcements confirmed the ascendancy of existential risk on the U.K.’s AI agenda.
‘X-risk’
Hogarth told the FT that biosecurity risks – like AI systems designing novel viruses – and AI-powered cyber-attacks weigh heavily on his mind. The taskforce is intended to address these threats, and to help build safe and reliable “frontier” AI models.
“The focus of the Frontier AI Taskforce and the U.K.’s broader AI strategy extends to not only managing risk, but ensuring the technology’s benefits can be harnessed and its opportunities realized across society,” said a government spokesperson, who disputed the influence of EA on its AI policy.
But some researchers worry that the more prosaic threats posed by today’s AI models, like bias, data privacy, and copyright issues, have been downgraded. It’s “a really dangerous distraction from the discussions we need to be having around regulation of AI,” says Aitken. “It takes a lot of the focus away from the very real and ethical risks and harms that AI presents today.”
The EA movement’s links to Silicon Valley also prompt some to question its objectivity. The three most prominent AI labs, OpenAI, DeepMind and Anthropic, all boast EA connections – with traces of the movement variously imprinted on their ethos, ideology and wallets.
Tech mogul Elon Musk claims to be a fan of the closely related “longtermist” ideology, calling it a “close match” to his own. Musk recently hired Dan Hendrycks, director of the Center for AI Safety, as an adviser to his new start-up, xAI, which is also doing its part to prevent the AI apocalypse.
To counter the threat, the EA movement is throwing its financial heft behind the field of AI safety. Open Philanthropy CEO Holden Karnofsky wrote a February blog post announcing a leave of absence to devote himself to the field, while the EA career advice center 80,000 Hours recommends “AI safety technical research” and “shaping future governance of AI” as the two top careers for EAs.
Trading in an insular jargon of “X-risk” (existential risks) and “p(doom)” (the probability of our impending annihilation), the AI-focused branch of effective altruism is fixated on issues like “alignment” – how closely AI models are attuned to humanity’s value systems – amid doom-laden warnings about “proliferation” – the unchecked propagation of dangerous AI.
Despite the movement’s popularity among a cohort of technologists, critics say its thinking lacks evidence and is alarmist. A vocal critic, former Googler Timnit Gebru, has denounced this “dangerous brand of AI safety,” noting that she’d seen the movement gain “alarming levels of influence” in Silicon Valley.
Meanwhile, the “strong intermingling” of EAs and companies building AI “has led … this branch of the community to be very subservient to the AI companies,” says Andrea Miotti, head of strategy and governance at AI safety firm Conjecture. He calls this a “real regulatory capture story.”
The pitch to industry
Citing the Center for AI Safety’s extinction risk letter, Hogarth called on AI specialists and safety researchers to join the taskforce’s efforts in June, noting that at “a pivotal moment, Rishi Sunak has stepped up and is playing a global leadership role.”
On stage at the Tony Blair Institute conference in July, Hogarth – perspiring in the midsummer heat but speaking with composed conviction – struck an optimistic note. “We want to build stuff that allows for the U.K. to really have the state capacity to, like, engineer the future here,” he said.
Although the taskforce was initially intended to build up sovereign AI capability, Hogarth’s arrival saw a new emphasis on AI safety. The U.K. government’s £100 million commitment is “the largest amount ever committed to this field by a nation state,” he tweeted.
The taskforce recruitment ad was shared on the Effective Altruism forum, and Hogarth’s appointment was announced in Effective Altruism UK’s July newsletter.
Hogarth is not the only one in government who appears to be sympathetic to the EA movement’s arguments. Matt Clifford, chair of the government R&D body ARIA, adviser to the AI taskforce and AI sherpa for the safety summit, has urged EAs to jump aboard the government’s latest AI safety push.
“I would encourage any of you who care about AI safety to explore opportunities to join or be seconded into government, because there is just a huge gap of knowledge and context on both sides,” he said at the Effective Altruism Global conference in London in June.
“Most people engaged in policy are not familiar … with arguments that would be familiar to most people in this room about risk and safety,” he added, but cautioned that hyping apocalyptic risks was not typically an effective strategy when it came to dealing with policymakers.
Clifford said that ARIA would soon announce directors who will be in charge of grant-giving across different areas. “When you see them, you will see there is actually a pretty good overlap with some prominent EA cause areas,” he told the crowd.
A British government spokesperson said Clifford is “not part of the core Effective Altruism movement.”
Civil service ties
Influential civil servants also have EA ties. Supporting the work of the AI taskforce is Chiara Gerosa, who in addition to her government work is facilitating an introductory AI safety course “for a cohort of policy professionals” for BlueDot Impact, an organization funded by Effective Ventures, a philanthropic fund that supports EA causes.
The course “will get you up to speed on extreme risks from AI and governance approaches to mitigating these risks,” according to the website, which states alumni have gone on to work for the likes of OpenAI, GovAI, Anthropic, and DeepMind.
People close to the EA movement say that its disciples see the U.K.’s AI safety push as encouragement to get involved and help nudge policy along an EA trajectory.
EAs are “scrambling to be part of Rishi Sunak’s announced Foundation Model Taskforce and safety conference,” according to an AI safety researcher who asked not to be named as they didn’t want to risk jeopardizing EA connections.
“One said that while Rishi is not the ‘optimal’ candidate, at least he knows X-risk,” they said. “And that ‘we’ need political buy-in and policy.”
“The foundation model taskforce is really centring the voices of the private sector, of industry … and that in many cases overlaps with membership of the Effective Altruism movement,” says Aitken. “That to me, is very worrying … it should really be centring the voices of impacted communities, it should be centring the voices of civil society.”
Jack Stilgoe, policy co-lead of Responsible AI, a body funded by the U.K.’s R&D funding agency, is concerned about “the diversity of the taskforce.” “If the agenda of the taskforce somehow gets captured by a narrow range of interests, then that would be really, really bad,” he says, adding that the concept of alignment “offers a false solution to an imaginary problem.”
A spokesperson for Open Philanthropy, Michael Levine, disputed that the EA movement carried any water for AI firms. “Since before the current crop of AI labs existed, people inspired by effective altruism were calling out the threats of AI and the need for research and policies to reduce these risks; many of our grantees are now supporting strong regulation of AI over objections from industry players.”
From Oxford to Whitehall, via Silicon Valley
Birthed at Oxford University by rationalist utilitarian philosopher William MacAskill, EA began life as a technocratic preoccupation with how charitable donations could be optimized to wring out maximal benefit for causes like global poverty and animal welfare.
Over time, it fused with transhumanist and techno-utopian ideals popular in Silicon Valley, and a mutated version called “longtermism,” fixated on ultra-long-term timeframes, now dominates. MacAskill’s most recent book, What We Owe the Future, conceptualizes a million-year timeframe for humanity and advocates the colonization of space.
Oxford University remains an ideological hub for the movement, and has spawned a thriving network of think tanks and research institutes that lobby the government on long-term or existential risks, including the Centre for the Governance of AI (GovAI) and the Future of Humanity Institute.
Other EA-linked organizations include Cambridge University’s Centre for the Study of Existential Risk, which was co-founded by Tallinn and receives funding from his Survival and Flourishing Fund – which is also the primary funder of the Centre for Long Term Resilience, set up by former civil servants in 2020.
The think tanks tend to overlap with leading AI labs, both in terms of membership and policy positions. For example, the founder and former director of GovAI, Allan Dafoe, who remains chair of the advisory board, is also head of long-term AI strategy and governance at DeepMind.
“We are conscious that dual roles of this form warrant careful attention to conflicts of interest,” reads the GovAI website.
GovAI, OpenAI and Anthropic declined to offer comment for this piece. A Google DeepMind spokesperson said: “We are focused on advancing safe and responsible AI.”
The movement has been accruing political capital in the U.K. for some time, says Luke Kemp, a research affiliate at the Centre for the Study of Existential Risk who doesn’t identify as EA. “There’s definitely been a push to place people directly out of existential risk bodies into policymaking positions,” he says.
CLTR’s head of AI policy, Jess Whittlestone, is being seconded to DSIT one day a week to assist on AI policy in the run-up to the AI Safety Summit, according to a CLTR August update seen by POLITICO. In the interim, she is informally advising several policy teams across DSIT.
Meanwhile, a former specialist adviser to the Cabinet Office, Markus Anderljung, is now head of policy at GovAI.
Kemp says he has expressed reservations about existential risk organizations attempting to get staff members seconded to government. “We can’t be trusted as objective and fair regulators or scholars, if we have such deep connections to the bodies we’re trying to regulate,” he says.
“I share the concern about AI companies dominating regulatory discussions, and have been advocating for greater independent expert involvement in the summit to reduce risks of regulatory capture,” said Whittlestone. “It is crucial for U.K. AI policy to be informed by diverse perspectives.”
Instead of the risks of existing foundation models like GPT-4, EA-linked groups and AI companies tend to talk up the “emergent” risks of frontier models — a forward-looking stance that nudges the regulatory horizon into the future.
This framing “is a way of suggesting that that’s why you need to have Big Tech in the room – because they are the ones developing these frontier models,” suggests Aitken.
At the frontier
Earlier in July, CLTR and GovAI collaborated on a paper about how to regulate so-called frontier models, alongside academics and staff from DeepMind, OpenAI and Microsoft. The paper explored the controversial idea of licensing the most powerful AI models, a proposal that’s been criticized for its potential to cement the dominance of leading AI firms.
CLTR presented the paper to No. 10 with the prime minister’s special advisers on AI and the director and deputy director of DSIT in attendance, according to the CLTR memo.
Such ideas appear to be resonating. In addition to announcing the “Frontier AI Taskforce,” the government said in September that the AI Summit would focus entirely on the regulation of “frontier AI.”
The British government disputes the idea that its AI policy is narrowly focused. “We have engaged extensively with stakeholders in creating our AI regulation white paper, and have received a broad and diverse range of views as part of the recently closed consultation process which we will respond to in due course,” said a spokesperson.
Spokespeople for CLTR and CSER said that both groups focus on risks across the spectrum, from near-term to long-term, while a CLTR spokesperson stressed that it’s an independent and non-partisan think tank.
Some say that it’s the external circumstances that have changed, rather than the effectiveness of the EA lobby. CSER professor Haydn Belfield, who identifies as an EA, says that existential risk think tanks have been petitioning the government for years – on issues like pandemic preparedness and nuclear risk in addition to AI.
Although the government appears more receptive to their overtures now, “I’m not sure we’ve gotten any better at it,” he says. “I just think the world’s gotten worse.”
It’s meant to be a legally binding deal that could prevent the next pandemic.
Originally proposed by European Council President Charles Michel in the worst days of the COVID-19 pandemic, the treaty aims to create a new set of rules to guide countries on pandemic preparedness and response.
But with countries fiercely divided on key issues and just 12 months left to agree, it’s looking increasingly likely that the text will end up as a damp squib.
As the who’s who of global health descends on Geneva in the coming days for the World Health Assembly — the annual meeting of the decision-making body of the World Health Organization — the fate of the treaty will be the main topic of discussion over glasses of champagne at swanky receptions.
The behemoth draft version of the text was ambitious, covering everything from access to vaccines to strengthening health systems so they can respond to health crises.
But with countries facing off over intellectual property rights and the rules around sharing medical products developed during a pandemic, a compromise with any substance looks increasingly difficult to reach.
“If the groups can give up a little bit and try to compromise, I think that in the middle, we might have something left … we might have something that is useful for the future,” said a Geneva-based diplomat, who requested anonymity to talk about confidential negotiations. However, they added that the “fallback position might be a treaty with a little bit of content — just a little bit.”
And then there’s the all-important question: How to ensure that countries actually comply with what’s agreed. “A treaty with no compliance mechanism is just a piece of paper,” warned Nina Schwalbe, founder of the public health think tank Spark Street Advisers and former senior official at UNICEF and Gavi, the Vaccine Alliance.
POLITICO walks you through the biggest sticking points:
Face-off with Big Pharma
There are two highly contentious proposals in the draft text. One calls on countries to take measures to support time-bound waivers of IP rights so that companies other than patent holders could make vaccines or treatments — an issue that countries never truly succeeded in solving during the COVID-19 pandemic. The second is to ensure that countries that share information about dangerous pathogens can access any resulting treatments and vaccines developed using this data.
Developing countries see these as central to ensuring equity in the next pandemic. But both are fiercely opposed by Big Pharma, which has the backing of some wealthy Western nations.
On intellectual property rights, the U.S. has taken a big red pen to the draft text, stripping out mention of waivers of intellectual property rights. It also wants to weaken provisions that would require pharmaceutical companies to license other manufacturers to produce their products.
In the debate over whether sharing information about new pathogens should be linked to some kind of benefit — potentially monetary — the lines are less clearly drawn. The Global South, which is pushing to include the benefits link, has the biggest ask, said a second Geneva-based diplomat, who also requested anonymity to talk about confidential negotiations. But a flat no from the Global North could see wealthy countries lose timely access to those pathogens — something that could delay the development of pathogen-specific vaccines or treatments, and cost lives.
Too many cooks, too little time
When WHO members agreed in December 2021 to negotiate a pandemic treaty by May 2024, the deadline seemed a lifetime away. But a lot of time was lost at the start of the process on procedural matters, said the first diplomat. That delay was likely “strategic at some point also for some groups,” they said, without specifying who they were referring to.
There’s no denying that the text tries to cover a lot of ground, much of it highly controversial. Given that, the deadline of May 2024 is “an extreme challenge,” said the second diplomat. What may be necessary is a streamlining of sorts. “It’s not about lowering the ambition but maybe lowering the level of detail,” they said.
Ambassador Nora Kronig, head of the international affairs division in the Swiss Federal Office of Public Health, told POLITICO that there is still uncertainty about the scope and content of the treaty. “There’s still a lot of work ahead of us to make it tangible and realistic and implementable,” she said.
‘Just a piece of paper’
Perhaps the biggest question is how the treaty will actually be enforced.
“There hasn’t been a lot of discussion about this because it touches on the difficult issue about sovereignty and about having an international organization or other countries, [having] a look on what you do, [and] on how you prepare,” said the second diplomat.
In a draft text, countries including China, Russia, Iran, Namibia and Egypt express strong reservations about monitoring mechanisms such as a peer review process, in which countries would carry out regular reviews of each other’s pandemic preparedness. Meanwhile, the EU, Canada and Switzerland have put forward proposals for stronger language on monitoring how ready a country is for a health crisis.
Some countries fear a naming-and-shaming process, but it doesn’t matter how well prepared one country is if another isn’t, said the first diplomat. “I think that we should be accountable to each other, and we should be transparent, and we should try our best to allocate resources and also to make the necessary changes to improve, and also to help others to improve,” they said.
Some observers want to go even further. Schwalbe would like to see a committee of independent people reporting on the treaty. “Whatever’s in it, we need to hold states accountable for what they’ve agreed to,” she said.
Ultimately, the outcome will be “the fruit of international negotiations,” said the second diplomat. “Of course, it will be the [lowest] common denominator.”
But their view is that if it binds countries on anything new then it’s worth something. “One could see anything that those countries agree upon [as] progress, even if it is watered down and it is incremental or iterative,” they said.