Elon Musk said on Saturday that he will file a “thermonuclear lawsuit” against non-profit watchdog Media Matters and others, as companies including Disney, Apple and IBM reportedly have paused advertising on X amid an antisemitism storm around the social media platform.
Media Matters, a U.S. group that describes itself as “a progressive research and information center” that monitors “media outlets for conservative misinformation,” earlier this week published research showing that ads on X have appeared next to pro-Nazi posts.
X CEO Linda Yaccarino previously said that brands are now “protected from the risk of being next to” potentially toxic content on the platform.
“The split second court opens on Monday, X Corp will be filing a thermonuclear lawsuit against Media Matters and ALL those who colluded in this fraudulent attack on our company,” Musk said in a post on X on Saturday.
Musk also posted a statement with the headline “Stand with X to protect free speech” where he said that Media Matters “completely misrepresented the real user experience on X.” He also said that “for speech to be truly free, we must also have the freedom to see or hear things that some people may consider objectionable” and added that “we will not allow agenda driven activists, or even our profits, to deter our vision.”
Musk, owner of Tesla and SpaceX, who bought Twitter last year and renamed it X, was already under fire for tolerating and even encouraging antisemitism on the social media platform. The latest episode came this week, when Musk endorsed an antisemitic post on X as “the actual truth” of what Jewish people were doing.
The antisemitic post said that “Jewish communties (sic) have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.” The post also referenced “hordes of minorities” flooding Western countries, a popular antisemitic conspiracy theory.
The companies suspending advertising on X include Disney, IBM, Apple, Paramount, NBCUniversal, Comcast, Lionsgate and Warner Bros. Discovery, according to media reports.
BRUSSELS — When EU digital chief Věra Jourová sat down in Beijing with a senior Chinese official in September, her complaint list was as long as the 11-course dinner her host had prepared.
Sore points included Beijing’s disinformation campaigns, electoral interference, state control over artificial intelligence development, and ties with Russia.
Predictably, Jourová didn’t get many straight answers from her counterpart, Vice Premier Zhang Guoqing. It’s a nail-biting time to be a politician in China, as major figures such as Qin Gang and Li Shangfu have recently been purged as foreign and defense ministers, and no one wants to be accused of making big concessions to the West.
Then, in a sudden surprise initiative, Zhang said he was ready to offer a goodie to European businesses facing an increasingly hostile political environment in President Xi Jinping’s China. He explained Beijing was willing to move on data flows — a sphere where China has been trying to curb the ability of foreign companies to export data generated within the country. All that data is a goldmine for European business, but China guards it zealously.
A deal on data flows was a big call from Zhang, but one that can be explained by China’s growing fears about its precarious economy. While security is front and center for Chinese policymakers, they also know they have to offer some big carrots to keep foreign investors onside.
“You could feel that something clicked on the spot,” said an EU official with knowledge of the discussion, recalling the heated debates on data over Chinese delicacies like beef in lotus leaves and dim sum.
Although the dinner happened in September, three officials with knowledge of China’s shift have only now explained how Beijing’s change of heart came about.
“The vice-premier told her he understood the proposal makes sense, and asked the relevant authorities to take the matter forward,” the first official said. Zhang immediately turned to his junior colleagues from the Cyberspace Administration of China and the Ministry of Industry and Information Technology. “You had a feeling that that was the moment the big guy gave the go-ahead.”
According to another official, when Trade Commissioner Valdis Dombrovskis visited Beijing shortly after Jourová, he received the final confirmation of the changes to the data laws from his counterpart, Vice Premier He Lifeng, an influential economic aide to President Xi Jinping.
Shortly afterward, China agreed to reverse the burden of proof under the relevant laws, allowing most data stored in China to be transferred out of the country unless expressly excluded by the authorities. EU officials, though, cautioned that they’ll still wait to see how Chinese authorities at all levels implement the new provision.
Special gift to Europe
Even though U.S., Japanese and other companies had also been pushing for this kind of measure from Beijing on data, China offered the diplomatic win to the EU.
The European Union Chamber of Commerce, among the first to be notified when Beijing made the legal revision, sent Jourová a congratulatory letter, seen by POLITICO.
“Make no mistake, China is merely fixing a problem of its own making,” the second official noted. “It’s not an act of benevolence. It’s an act of self-correction.”
Still, that self-correction is far from a given under a nationalistic government facing stiff competition from the U.S.
Increasingly, China’s uncompromising ideological focus is forcing many companies to adjust their business strategies, including by taking their new investments out of China. Indeed, the EU and the rest of the G7 rich democracies are calling on their companies to “de-risk,” as Russia’s war against Ukraine prompts concerns about a possible Chinese invasion of Taiwan.
According to a report issued Wednesday by Penta, a business research group, one in five EU policymakers considers China to be the most pressing issue facing the bloc — while only 16 percent say they’re open to working with Chinese companies, the lowest figure for any country on the list.
It’s against this backdrop that Beijing wants — and needs — to throw some bones to the EU.
“For sure there’s a lot of self-interest for China [to give EU the data deal], where there’s a sharp drop of foreign direct investment which China desperately needs,” the first official said.
Over the past three months, Beijing has welcomed a long line of EU officials in a thaw from the 2021 low point, when China’s sanctions on EU politicians and intellectuals were followed by an indefinite freeze of a massive EU-China trade deal, which remains unratified.
Commission President Ursula von der Leyen and her European Council counterpart Charles Michel are expected to attend an EU-China Summit in December and meet Chinese President Xi Jinping.
EU officials should use China’s underperforming economy — most specifically in the real estate sector — as leverage, according to Luisa Santos, deputy director of BusinessEurope, a Brussels-based lobby group, who is currently visiting China.
Speaking before her trip, Santos described the Chinese economy as “not in a great situation,” adding that EU officials should seize this opportunity to convince Beijing to open up further.
“China needs to recognize that what is happening in our bilateral relationship is something that is not sustainable,” she said.
Firearms have accounted for the deaths of more American children than any other cause since 2020. The true damage guns inflict on children is larger still, as demonstrated by a new study showing that emergency-room visits for children injured by firearms nearly doubled during the pandemic.
In a survey of nine U.S. hospitals, a team led by Dr. Jennifer Hoffmann, a pediatric emergency medicine physician at Lurie Children’s Hospital of Chicago, found that pediatric emergency room visits for gunshot wounds rose 74%, from 694 in the years before the pandemic to 1,210 during it, according to data from 2017 through 2022. Over the same period, the death rate among gun victims age 18 and under nearly doubled as well, from 3.1% to 6.1% of all children injured by firearms.
That increase was apparent to physicians who worked in emergency rooms throughout the pandemic, Hoffmann says. But for the first time in decades, she and other researchers were able to secure federal funding to study what they were seeing, after a longtime freeze on grants supporting gun-violence research was lifted in 2020. Hoffmann’s study is one of two published yesterday (Nov. 6) that help reveal the full extent of the problems gun violence poses to American kids, their families, and the health care economy.
An increase in child firearm injuries early on in the pandemic was initially thought to be the direct result of a wave of firearm purchases during lockdown, combined with heightened emotions and the dramatic changes to Americans’ daily routines, Hoffmann says. “We hoped that as the daily impact of the pandemic decreased, that we would see a decline in firearm injuries,” she says. “But instead, we saw that the elevated levels of visits persisted and remain significantly elevated.”
Hoffmann also found that gunshot-related pediatric emergency room visits increased only among Black and Hispanic youth, indicating what she calls “a widening of the disparities” that existed between these groups and their white peers before the pandemic.
The data, published Nov. 6 in the journal Pediatrics, doesn’t offer many other clues about the possible origins of this alarming trend. Firearm injuries are sorted into three types in hospital reporting and billing systems: accidental injuries, often the result of improper gun storage and curious kids; self-inflicted injuries, most of them the result of suicide attempts; and assault injuries. This simple categorization can easily miss nuance that could be helpful for researchers, explains Hoffmann. Clear early-pandemic factors, like increased gun buying, poor teen mental health, and rising community violence levels, respectively, seem to be easy explanations for rising numbers in each injury category, but it’s likely that there are many more unknown causes at play. The proportions of the injury categories stayed fairly constant over time, says Hoffmann, which makes it even harder to point to any one cause of increased injury.
This data is still important to have, since research on gun violence tends to focus on fatalities, says Dr. Zirui Song, an associate professor of health care policy and medicine at Harvard Medical School. “What is often forgotten is the much larger number of people in America each year who sustain firearm injuries but are able to survive,” he says. Song’s own work, including a study also released yesterday, takes a more expansive look at the impacts of firearm injuries in children by analyzing the consequences for those around them.
His paper in Health Affairs demonstrates what he calls the “shared family trauma” that occurs when a child is injured or killed by gunfire, which includes a more than 30% increase in psychiatric disorders among the parents of survivors.
Siblings of victims, too, are deeply affected. Although some family members rely on mental health services more often after a child is injured or killed, Song found that routine medical care often fell by the wayside for the siblings and mothers of survivors, with a decrease of between 5% and 14% across various visits and procedures. “This doesn’t necessarily mean that siblings were unharmed or sitting at home and just okay with it,” says Song, who has cared for the families of victims as an internal medicine physician at Massachusetts General Hospital. Instead, he believes it’s more likely a reflection of trauma remaining so unaddressed that it prevents families from engaging with care altogether.
Song also found that in the first year after being injured, health care spending for young survivors went up an average of $34,884, an economic burden shouldered almost entirely by insurers and employers, and one that he hopes continues to be highlighted in conversations about protecting children. (Song’s study included only families with employer-sponsored insurance, and he hopes to replicate it with families insured by public programs like Medicaid.) “So often in the healthcare system, moral arguments don’t move the needle on something,” he says. “In this case, children dying from firearm injuries have not moved the needle, but often dollars do. Gun violence is not only a medical issue and a public health issue, but it is increasingly an economic issue for our country.”
If it feels like researchers are trying to account for all sides of the issue at once, it’s because they’ve been left with little other choice, says Hoffmann. They’re rushing to fill the gap left by years of growing gun ownership without the resources to track it. With papers like Hoffmann’s and Song’s, researchers are still putting together an initial picture of gun violence today. “We’re decades behind where we should be in understanding why these increases are occurring and what we can do about them,” she says. The same, then, goes for the extent of their impact.
Correction, Nov. 7
The original version of this story misspelled the name of a pediatric emergency medicine physician at Lurie Children’s Hospital of Chicago. She is Jennifer Hoffmann.
Lincolnshire, Ill. – 95 Percent Group LLC, the trusted source for proven literacy solutions, unveiled 95 Phonemic Awareness Suite™, a comprehensive program for developing awareness of speech sounds for students in grades K-1. Aligned with the latest research on phonemic awareness and part of the One95™ Literacy Ecosystem™, the new suite includes core and intervention lessons, intervention tools, assessments and teacher professional learning.
Building phonemic awareness means developing the understanding that spoken words are made up of specific sounds, called phonemes. The focus of phonemic awareness is on those sounds, but recent research reports that good phonemic awareness instruction makes the critical connection to the grapheme—letters or groups of letters—that represents the sound. The 95 Phonemic Awareness Suite is a prime example of this research brought to life in the classroom.
“Building a foundation in the ways that written words connect to spoken words begins with phonemic awareness. Phonemic awareness is essential for developing literacy skills and a strong predictor of reading success,” said Laura Stewart, Chief Academic Officer, 95 Percent Group. “Our new 95 Phonemic Awareness Suite is grounded in the current research on phonemic awareness, providing teachers with an evidence-based, comprehensive program that will help young learners develop a foundation for becoming proficient readers.”
95 Phonemic Awareness Suite gives teachers the full array of tools they need to help K-1 students master critical skills. At the core of the suite is 95 Pocket PA™, which provides teachers with lessons to develop students’ phonemic awareness in just 10 minutes per day. 95 Pocket PA includes 50 weeks of lessons for Tier 1 students, including digital presentation files and articulation videos.
Providing additional support for students in need of intervention (Tier 2), 95 Phonemic Awareness Intervention Resource™ (PAIR) is aligned with Pocket PA, supporting a seamless transition to intervention that is based in familiar routines and instructional dialogue. Intervention resources include a teacher’s guide, Kid Lips Cards, Sound Spelling Cards and a Student Manipulatives Kit.
Teachers can pinpoint student skill gaps and differentiate instruction with 95 Phonemic Awareness Suite’s easy-to-administer assessment, 95 Phonemic Awareness Screener for Intervention™. Digital assessments are delivered over the new One95 Literacy Platform.
In addition, the suite provides professional learning for teachers, equipping them with knowledge and best practices grounded in the latest research on phonological processing, phonology and phonetics; training on implementing the suite in the classroom; and a practice-informed, follow-up session on acting on assessment data.
“This is the phonemic awareness suite every school needs to help young learners grow into readers,” said Jennifer Harris, Chief Product Officer, 95 Percent Group. “It is intentionally designed to be easy to use, fun and engaging, comprehensive, and effective for all students, including those with language variations.”
For additional information on the new 95 Phonemic Awareness Suite, read this Q&A.
About 95 Percent Group
95 Percent Group is an education company whose mission is to build on science to empower teachers—supplying the knowledge, resources and support they need—to develop strong readers. Using an approach that is based in structured literacy, the company’s One95™ Literacy Ecosystem™ integrates professional learning and evidence-based literacy products into one cohesive system that supports consistent instructional routines across tiers and is proven and trusted to help students close skill gaps and read fluently. 95 Percent Group is also committed to advancing research, best practices, and thought leadership on the science of reading more broadly.
eSchool Media staff cover education technology in all its aspects–from legislation and litigation, to best practices, to lessons learned and new products. First published in March of 1998 as a monthly print and digital newspaper, eSchool Media provides the news and information necessary to help K-20 decision-makers successfully use technology and innovation to transform schools and colleges and achieve their educational goals.
Scientists are turning data into music to see if it can help us understand large and intricate datasets in new and interesting ways.
A “data-to-music” algorithm developed by researchers at Tampere University and Eastern Washington University transforms intricate digital data into sound, offering a novel and potentially revolutionary approach to data comprehension.
Sonic Data Interpretation
At TAUCHI (the Tampere Unit for Computer-Human Interaction) in Finland and at Eastern Washington University in the U.S., a research group spent half a decade exploring the merits of converting data into musical sounds. Funded by Business Finland, their findings are encapsulated in a recent research paper.
Jonathan Middleton, DMA, the main contributor to the study, is a professor of music theory and composition at Eastern Washington University and a visiting researcher at Tampere University. Under his guidance, the research focused on enhancing user engagement with complex data variables through “data-to-music” algorithms. To demonstrate the approach, the team used data from Finnish meteorological records.
Middleton emphasizes the transformative potential of the findings. “In today’s digital era, as data collection and deciphering become intertwined with our routine, introducing fresh avenues for data interpretation becomes crucial,” he says. He champions a ‘fourth’ dimension in data interpretation, emphasizing the potential of musical characteristics.
Turning Data Into Music
Music is not just an art form; it captivates, entertains, and resonates with human emotions. It enhances the experience of films, video games, live performances, and more. Now, imagine the potential of harnessing music’s emotive power to make sense of complex data sets.
Picture a basic linear graph displaying heart rate data. Now, amplify that visualization with a three-dimensional representation enriched with numbers, hues, and patterns. But the true marvel unfolds when a fourth dimension is introduced, where one can audibly engage with this data. Middleton’s quest revolves around identifying which mode or dimension maximizes understanding and interpretation of the data.
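The paper doesn't spell out the researchers' actual algorithm, but the core idea of sonification can be illustrated with a minimal sketch: rescale a numeric series, such as the heart rate or temperature data described above, into a range of musical pitches. The note range and the example temperatures below are assumptions for demonstration only.

```python
# Illustrative sonification sketch (not the researchers' published
# algorithm): linearly map a numeric data series onto MIDI pitch
# numbers so the data can be "heard" as a melody.

def series_to_midi(values, low_note=60, high_note=84):
    """Rescale a data series into MIDI notes.

    low_note=60 is middle C; high_note=84 is C two octaves above.
    Returns one integer MIDI note number per data point.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        # A flat series maps to the middle of the pitch range.
        return [(low_note + high_note) // 2] * len(values)
    scale = (high_note - low_note) / (hi - lo)
    return [round(low_note + (v - lo) * scale) for v in values]

# Hypothetical week of Finnish temperatures (degrees Celsius)
temps = [-3.0, -1.5, 0.0, 2.5, 4.0, 1.0, -2.0]
notes = series_to_midi(temps)
print(notes)
```

Fed to any MIDI synthesizer, the coldest day would sound as the lowest note and the warmest as the highest, letting a listener track the trend by ear rather than by eye.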
For businesses and entities that anchor their strategies on data interpretation to tailor offerings, Middleton’s research presents profound implications. He believes the findings lay the groundwork for data analysts worldwide to tap into this fourth, audial dimension, enhancing understanding and decision-making.
A Symphony of Data Possibilities
As data continues to drive decision-making processes across industries, the quest for innovative interpretation techniques remains relentless. Tampere University and Eastern Washington University’s “data-to-music” research illuminates a path forward. With the potential to hear and emotionally connect with data, industries can achieve a deeper understanding, making data analysis not just a technical task but also an engaging sensory experience.
LONDON — London and Washington are to announce a “close collaboration” on AI safety as early as Wednesday, U.K. and U.S. officials confirmed to POLITICO.
The collaboration is expected to marry new guardrails the White House placed on artificial intelligence development in this week’s executive order (EO) with existing work by the United Kingdom’s “Frontier AI Taskforce.”
“We plan to announce close bilateral collaboration with the U.S. safety institute this week,” a U.K. official close to the planning of Britain’s AI safety summit told POLITICO. The person was granted anonymity to talk about the summit, which will take place at Bletchley Park on Nov. 1 and 2.
Both countries will be announcing their own version of the institutes as the summit kicks off. In a speech Wednesday in London, U.S. Vice President Kamala Harris, who is representing the Biden administration at the summit, will announce the United States AI Safety Institute, which will be housed at the Department of Commerce, according to a U.S. official granted anonymity to discuss internal plans.
“It will work to create guidelines, standards and best practices for evaluating and mitigating the full spectrum of risks,” the U.S. official added. “We must address the full spectrum of risk, from potentially catastrophic risks to societal harms that are already happening such [as] bias, discrimination and the proliferation of misinformation.”
Meanwhile, British Prime Minister Rishi Sunak has said he will set up an “AI Safety Institute” that will examine, evaluate and test new types of the emerging technology. Sunak said the new institute will build on the work of Britain’s existing Frontier AI Taskforce, which he said has already been granted “privileged access” to the technology models of leading AI companies like Google DeepMind, Anthropic and OpenAI.
The countries will “also participate in information sharing and research collaboration,” said the U.S. official, and will be making their own separate announcements. The U.S. will also share information with other similar safety institutes in other countries.
The White House executive order signed Monday will require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. It is designed to ensure AI systems are safe before companies make them public. Under the EO, Washington will set up an “AI Safety and Security Board.”
“We’re trying to lead with substance here and we’re trying to engage with other countries with substance and this is a vision, and the Vice President will lay it out in her speech, […] for how the United States is seeing AI policy and AI governance,” White House special adviser on AI Ben Buchanan said on a forthcoming episode of the POLITICO Tech podcast, when asked why the EO came in the same week as the U.K. AI summit. Harris is giving a speech in London on Wednesday afternoon on the administration’s AI initiatives, including the EO.
The U.K.’s Tech Secretary Michelle Donelan told POLITICO on Tuesday that it was easier for the U.S. to lead the industry to be more transparent because it is dominated by American firms, but there are aspects of the work that the U.K. can move faster on.
“I know America and other countries will have plans for institutes too, but we can do it a lot quicker, because we already have that initial organization in the [Frontier AI Taskforce],” she said. “We’ve already got that expertise setup, funding in there, and our processes allow us to do that at a quicker speed.”
“The future vision is to secure the safety of models before they are released,” Sunak said Thursday. Britain is expected to publish some information publicly, but will reserve more sensitive national security intel to a smaller group of like-minded governments.
Vincent Manancourt, Eugene Daniels and Annabelle Dickson
LONDON — Britain’s tech chief is no stranger to dealing with big egos. She used to promote superstar wrestlers.
U.K. Science and Technology Secretary Michelle Donelan’s past career as a marketeer for WWE wrestling may stand her in good stead at Bletchley Park on Wednesday, as she hosts representatives from more than 100 tech companies, countries and academic institutions on the first day of a U.K.-hosted summit which aims to grapple with one of the biggest challenges of our time — the rise of artificial intelligence.
Working at the fast-paced WWE was “very much like” being at her busy Department for Science, Innovation and Technology (DSIT), Donelan tells POLITICO — somewhat improbably — in an eve-of-summit interview at her sparsely-decorated office on Whitehall.
The oddball world of commercial wrestling was also good training for politics.
“It was an eye-opener to different personalities, and how to deal with those different personalities,” she says — ideal for “dealing with big egos, in terms of British politics.”
A low-profile Tory MP who only bagged her first junior ministerial job in 2019, Donelan makes for a surprising compère for the first day of Rishi Sunak’s much-hyped AI summit.
Unlike Sunak, the 39-year-old was no self-professed tech geek when she was entrusted with setting up his new science and technology department in February 2023. By her own admission she doesn’t regularly use generative AI tools like ChatGPT.
But Donelan, who was pregnant with her first child when she was handed the science and tech brief, has been wading through piles of binders detailing technical information as she tries to get to grips with the subject. Colleagues note admiringly (and sometimes despairingly) how she operates on just a few hours’ sleep.
“I think my journey on this has been a deeper understanding of … just how vital it is that we do lead in this, that we aren’t passive, that we don’t wait for others,” she says.
Summit going on
Since February, Donelan has been laying the groundwork for a summit Sunak hopes will be one of the defining moments of his premiership, with the objective of convincing world leaders to agree on the risks posed by AI.
She, like the PM, is concerned about the potential disruption artificial intelligence could pose. “The risks are very daunting, there’s no denying that,” she says, while acknowledging “there is a debate about whether they will materialize or not.”
Her critics say the summit is wrongly focused on long-term risk, however, and argue not enough is being done to tackle AI’s more immediate threats.
The U.K. is “way behind” in terms of bringing forward actual legislation, said Peter Kyle, Donelan’s opposite number in the Labour Party, who has not been invited to this week’s summit. Donelan’s department has not yet even published a response to its own consultation on an artificial intelligence white paper published way back in March, he pointed out.
Donelan insists the summit is “only part” of the U.K.’s work on artificial intelligence, however, and that it plans to say more about the white paper — a first step toward legislation — “by the end of the year.”
“We’re not afraid to legislate. There will have to be legislation in this space eventually,” she says.
But specifics are thin on the ground. She refuses to be drawn on “arbitrary timelines.”
Surviving the hospital pass
It was Donelan’s embrace of the government’s controversial Online Safety Bill, which she inherited in her previous ministerial role during the short-lived premiership of Liz Truss, which attracted the attention of Sunak.
In the hard-fought Tory leadership campaign of July and August 2022, Truss and Sunak both promised to scrap parts of the bill focused on policing “legal but harmful” online content. It was Donelan, appointed as culture secretary by Truss, who was left to unravel those pledges.
Her “no-nonsense” and “methodical” approach to the bill, and her willingness to take the views of her MP colleagues seriously, impressed Sunak when he arrived in No. 10 following Truss’ self-destruction.
For that reason he kept her in post — and then chose her to set up the new department for science and technology earlier this year, according to a No. 10 official closely involved with that decision, granted anonymity to discuss internal government business.
“I think Rishi, like me, can see that she is one of those effective secretaries of state that will deliver outcomes,” said former Education Secretary Nadhim Zahawi, whom Donelan worked alongside prior to her promotion to Cabinet.
Finally getting the Online Safety Bill into law was a notable achievement. Donelan’s previous claim to fame had been her unwanted record of being the shortest-serving Cabinet minister in British history. She took the job of education secretary, and then resigned 35 hours later, in the chaotic final days of the Boris Johnson administration.
Child protection
Donelan’s resolve to get the bill through parliament had been hardened by a one-to-one meeting with campaigner Ian Russell last November. His daughter Molly took her own life after viewing suicide content online.
Donelan has kept the dossier of Molly’s posts handed to her by Russell at that private meeting, according to one U.K. government official. “From that [meeting] she was more determined to do something on child protection,” they said.
“It was heart-wrenching to hear his story, and those of other bereaved parents, and I felt very passionately that we had an opportunity to really make a difference on this and to change the nature in which we regulate the online world,” Donelan says.
Her approach was strikingly different to the long line of Tory ministers who preceded her. Her willingness to simply pick up the phone to relevant business leaders — often bypassing official government channels — has won her admirers in the exasperated U.K. tech industry, which has endured a succession of different ministers overseeing a bill plagued by uncertainty.
“It was a complete breath of fresh air when she came in,” said Dom Hallas, executive director of tech lobbying outfit the Startup Coalition. “At industry roundtables she is to the point and well-briefed, but she is also frank when something is not going to happen.”
“She actually gets things done, which I would contrast with the previous [Boris Johnson-led] regime. She does listen and seems interested in trying to find out what various stakeholders think about things,” Julian David, chief executive of industry body TechUK, added.
Donelan feels she has skin in the game. Her son was born in the spring, and the tech secretary says the new online laws make her “a lot more confident in his use of social media, when he’s old enough.”
Donelan confirms, however, that being handed a new government department, while heavily pregnant, and about to take maternity leave, was no small challenge.
“I’m not going to lie. It’s a lot harder than I thought it was going to be. Before you have a child you don’t appreciate you are going to have things like ‘Mum guilt’,” she says. “It was easier in my head and harder in reality.”
The long game
Donelan’s unshowy style belies a burning ambition, according to multiple MPs and officials who have tracked her career to date.
In 1999, aged just 15, she spoke at the Conservative Party Conference in Blackpool. She was just 26 when she first stood for election, as a no-hoper in the safe Labour seat of Wentworth and Dearne in 2010.
Three years later she became the Conservative candidate for the Lib Dem held seat of Chippenham — going on to overturn a 2,470 Lib Dem majority in the 2015 general election.
On arriving in parliament, Donelan’s ambition was obvious to colleagues. One recalls her immediately asking for advice on how to climb the career ladder.
Soon after she took her first step up, as a parliamentary private secretary — a lowly unpaid aide to a minister — the Conservative whips’ office created a leaderboard tallying the workrate of the 40-odd MPs holding similar roles. Donelan led the way, smashing every target by a significant margin, one minister said.
“If she’s given a task she will attack it like nothing else. I’m not so sure about the bigger picture stuff — wider strategizing and setting a direction herself. But give her a direction and she’ll go at it,” the same minister said.
In her private life, Donelan is a committed Christian who shies away from the darker side of politics. She is “extremely respectful of Cabinet colleagues,” another former government official who worked with her said. “She doesn’t seem to be involved in backdoor skulduggery. It is all very earnest, but it is working for her in a way that is quite refreshing.”
Yet she raised eyebrows at the Conservative Party conference in October with a main stage speech clearly designed to please the grassroots and capture a few right-wing headlines. Donelan vowed a crackdown on the “creeping wokeism” she claimed is threatening scientific research — and went viral for all the wrong reasons.
A difficult interview with the BBC’s Victoria Derbyshire at the same conference also landed her less-than-positive headlines.
For an ambitious minister looking to wrestle her way onto the world stage this week, these are nothing more than hazards of the job.
BLETCHLEY PARK, England — The United States and China joined global leaders to sign a 27-country agreement on AI risk, launching the two-day AI Safety Summit.
In a major diplomatic coup for the British hosts, U.S. Commerce Secretary Gina Raimondo took the stage on Wednesday morning alongside Wu Zhaohui, China’s vice minister of science, at the summit at Bletchley Park — a former military installation north of London where British engineers used early forms of computers to break German codes during World War II.
The site — symbolic of what London believes is a similar global need to rein in the potential harms of artificial intelligence — forms the backdrop for efforts by politicians, tech executives and academics to find new ways to police a technology evolving faster than almost all governments can respond to it.
This week alone, the U.S. government and G7 group of leading Western democracies published separate efforts to regulate artificial intelligence in the form of a White House executive order and voluntary code of conduct, respectively. The EU expects to complete its separate Artificial Intelligence Act by early December and the United Nations’ newly-created AI advisory board will provide its own recommendations by the end of 2023.
“We will compete as nations. But even as we compete vigorously, we must search for global solutions for global problems,” said Raimondo, who is traveling to the United Kingdom alongside U.S. Vice President Kamala Harris. “The work, of course, does not begin and end with just the U.S. and the U.K. We want to expand information sharing, research, collaboration, and ultimately policy alignment across the globe.”
In a summit communiqué, published Wednesday, 27 countries and the EU signed the so-called Bletchley Park Declaration on AI. The document focuses solely on so-called “frontier AI,” or the latest version of the technology that has become popular via digital services like OpenAI’s ChatGPT.
The signing countries include both China and the U.S. despite the world’s two largest economies battling over everything from technology to geopolitical power. The voluntary statement commits governments to work together toward trustworthy and responsible AI — catchwords for the safe use of the emerging technology.
“China is willing to engage on AI governance for the promotion of all mankind. That’s our objective,” Wu Zhaohui, China’s vice minister of science and technology, told the audience in Bletchley. The official sat on stage next to the U.S.’s Raimondo despite the countries’ ongoing tension.
References to global AI regulation efforts undertaken by international organizations such as the United Nations and the Organisation for Economic Cooperation and Development, which were featured in an earlier draft, did not make it to the final communiqué. Questioned about that in a press briefing, U.K. Digital Minister Michelle Donelan said that the summit “complements and doesn’t cut across the existing processes” unfolding at the international level, and that officials from the U.N. and the OECD see the U.K.’s initiative as “a missing piece of the [AI regulation] puzzle” as it specifically deals with advanced frontier AI.
The British government announced the next AI Safety Summit will be held in South Korea in May 2024, and a third event is planned for France by the end of next year. The U.K. and the U.S. also announced plans to work together on AI Safety Institutes, which are expected to exchange analyses.
Věra Jourová, the EU’s digital chief, welcomed the renewed efforts to rein in potential risks associated with the most advanced systems of artificial intelligence. The 27-country bloc has been working on its own AI legislation for the last three years. But the Czech politician acknowledged much had changed over that time period when it came to what AI systems could now do.
“We have a common obligation for doing this right,” Jourová told the British audience Wednesday in reference to global efforts to set guardrails for the emerging technology. “The future will ask us if we did the right thing at the right moment.”
Scientists at the University of California – Riverside have engineered plant biosensors that change color in the presence of specific chemicals.
Someday, the greenery decorating our homes and gardens might be both ornamental and an environmental watchdog. (Of course, plants are already good indicators of their surroundings since they tend to wilt or die when things get toxic.)
Innovative Plant Biosensors
It all started with a question: What if a simple house plant could alert you about contaminants in your water? Delving deep into this concept, the UC Riverside team made it a reality. In the presence of a banned, toxic pesticide known as azinphos-ethyl, the engineered plant astonishingly turns a shade of beet red. This development offers a visually compelling way to indicate the presence of harmful substances around us.
Ian Wheeldon, an associate professor of chemical and environmental engineering at UCR, emphasized the groundbreaking nature of this achievement. “In our approach, we ensured the plant’s natural metabolism remains unaffected,” he explained. “Unlike earlier attempts where the biosensor component would hinder the plant’s growth or water absorption during stress, our method doesn’t disrupt these essential processes.”
The team’s findings, elaborated in a paper published in Nature Chemical Biology, unveiled the secret behind this transformative process. At the heart of the operation lies a plant hormone known as abscisic acid (ABA). Under stressful conditions like droughts, plants produce ABA, signaling them to conserve water and prevent wilting. The research team unlocked the potential of ABA receptors, training them to latch onto other chemicals besides ABA. When these receptors bind to specific contaminants, the plant undergoes a color change.
From Plant to Yeast: Expanding the Biosensor Spectrum
The UC Riverside team didn’t just stop at plants. They expanded their research horizon to include yeast, turning this organism into a chemical sensor. Remarkably, yeast exhibited the capability to respond to two distinct chemicals simultaneously, a feat yet to be achieved in plants.
Sean Cutler, UCR professor of plant cell biology, highlighted the team’s vision. “Imagine a plant that can detect up to 100 banned pesticides,” he said. “The potential applications, especially in environmental health and defense, are immense. However, there’s a long way to go before we can unlock such extensive sensing capabilities.”
The Path Forward for Plant Biosensors
While the initial results are promising, commercial growth of these engineered plants isn’t on the immediate horizon. Stringent regulatory approvals, which could span years, are a significant hurdle. Moreover, as a nascent technology, there are numerous challenges to overcome before it finds a place in real-world applications, like farming.
Yet, the future looks bright. “The potential extends beyond just pesticides,” Cutler added. “We aim to detect any environmental chemical, including common drugs that sometimes seep into our water supplies. The technology to sense these contaminants is now within reach.”
A survey, released this week by the cannabis wellness company EO Care, found that “18 percent of respondents have used cannabis for health reasons in the past year, 19 percent have used cannabis for recreational reasons, and 14 percent have used it for both.”
It also revealed that the “top three reasons for their cannabis use are anxiety, pain and sleep. 88 percent of medical cannabis users say it reduced their use of prescription drugs, alcohol, or both,” and that “51 percent said they would be likely/very likely to use cannabis if it were offered by their health plan.”
But perhaps most notable was the finding that “65 percent of respondents said they would feel more comfortable using cannabis if it were screened and dosed by a clinician.”
Sean Collins, co-founder and CEO of EO Care, said that the survey highlights the need for readily available medical advice on marijuana treatment.
“Finding clinical guidance for medicinal cannabis is difficult because most doctors lack the knowledge and retail dispensaries are not equipped to provide medical advice,” Collins said in a press release. “As a result we have tens of millions of Americans using cannabis for health reasons without guidance on specific product recommendations, dosage amounts, possible drug interactions, or consideration of their health history and other potential health risks. Given that sales of cannabis for health reasons is far higher than most prescription drugs, this is a highly concerning situation for healthcare generally.”
EO Care said that the survey was based on responses of 1,027 Americans who are “employed at least part-time and were from US states where cannabis is legal for medical and/or recreational use.”
“94 percent of Americans live in a state where cannabis is legal in some form,” added Collins. “And we know a large percentage of Americans have used cannabis in the past year, so this is definitely impacting employees and health outcomes. With the right medicinal cannabis guidance employers have an opportunity to help their employees, improve health outcomes and be progressive leaders in offering this important benefit that employees will come to expect.”
Thirty-eight states have legalized some form of medical cannabis treatment, and polls routinely show that broad swaths of the country are in favor of making it legally available.
That trend holds true even in states where cannabis remains illegal. A poll released earlier this year found that 76% of adults in South Carolina are in favor of legal medical cannabis. Both recreational and medical marijuana are illegal in the state.
Last year, a survey from the Pew Research Center showed that an “overwhelming share of U.S. adults (88%) say either that marijuana should be legal for medical and recreational use by adults (59%) or that it should be legal for medical use only (30%).”
“With a growing number of states authorizing the use of marijuana, the public continues to broadly favor legalization of the drug for medical and recreational purposes…Over the long term, there has been a steep rise in public support for marijuana legalization, as measured by a separate Gallup survey question that asks whether the use of marijuana should be made legal – without specifying whether it would be legalized for recreational or medical use. This year, 68% of adults say marijuana should be legal, matching the record-high support for legalization Gallup found in 2021,” Pew wrote in its analysis.
“There continue to be sizable age and partisan differences in Americans’ views about marijuana. While very small shares of adults of any age are completely opposed to the legalization of the drug, older adults are far less likely than younger ones to favor legalizing it for recreational purposes.”
The survey from EO Care, which was released on Tuesday, also found that “56 percent of respondents said they would be more likely to take a job at a company whose health plan offered cannabis care,” and that “44 percent would reconsider applying for a job at a company that tested for prior cannabis use or prohibited cannabis outside of the workplace.”
EO Care bills itself as “the first clinically guided cannabis health and wellness solution for employers,” saying that its “digital health service gives HR and benefits leaders the necessary tools to help employees determine if cannabis should be part of their healthcare journey or not by providing clinical education and personalized care guidance – including cannabis overuse, which is increasingly common given the lack of medical guidance.”
“Built on data from leading cannabis clinicians and researchers, EO Care provides clinician guidance and proprietary data models to help employers tackle unguided cannabis use and give employees an effective option for relief in cancer treatment, pain management, opioid replacement, anxiety, and sleep management. The company is led by a team of experts in CX healthcare, biotech and data intelligence,” this week’s press release read.
BRUSSELS — On a warm overcast afternoon in late September, Brussels’ digerati streamed into a cramped event space, just moments from the headquarters of the European Commission, to listen to the U.K.’s man of the hour.
Blonde and natty in a crisp white shirt and slim-fit navy suit, Matt Clifford — the British Prime Minister’s official representative for this week’s AI Safety Summit — ambled to the lectern with the smiling ease of someone who has delivered dozens of impromptu speeches.
The event, invitation-only and held under the Chatham House Rule, was just one leg of Clifford’s globetrotting, which has taken him from London to Washington and Beijing. These days, as he told POLITICO, he “can sleep anytime, anywhere.”
Clifford has been weaving across the planet to talk to top policymakers and tech barons about this week’s Bletchley Park summit, which will focus on severe risks like AI-aided cyberattacks and weapon design, and on which Rishi Sunak has pinned his hopes for a legacy. Many tech CEOs have known Clifford for years; presidents and prime ministers had better get up to speed.
A venture capitalist, chairman of the U.K.’s moonshot factory Advanced Research and Invention Agency (ARIA), and now an AI diplomat, 37-year-old Clifford has become one of the most influential people in British tech — just as post-Brexit U.K. scrambles to become a global beacon of AI rulemaking.
The politician’s techie
Clifford’s rise neatly maps onto the parabola of the U.K.’s tech industry: from curio, to jewel in the crown, to geopolitical tool. His debut came in 2011, just as then Prime Minister David Cameron was hitching his wagon to London’s burgeoning startup scene – dubbed the Silicon Roundabout.
A McKinsey consultant with degrees from Cambridge (medieval history) and MIT (computational statistics), Clifford yearned for a change, and a colleague handed him a report McKinsey had just published on the Roundabout recommending investment in nurturing tech founders.
Clifford jumped at the opportunity. He had grown up in Bradford — a northern English city scarred by deindustrialization — and taught himself to code because, he said, he “didn’t want to work in [fast food chain] Gregg’s.”
Together with fellow consultant Alice Bentinck, he founded Entrepreneur First (EF), an accelerator that invests in graduates to help them launch startups. EF would go on to build some of the U.K.’s most successful tech unicorns.
It also gave the duo an “in” to attend the monthly breakfasts Cameron held in No. 10 with London’s tech grandees.
Clifford’s affability has helped him develop a network spanning from European startuppers to Silicon Valley heavy-hitters — LinkedIn co-founder Reid Hoffman, an ex-board member of OpenAI, sits on EF’s board and prefaced Bentinck’s and Clifford’s 2022 book “How to Be a Founder”.
“Matt is a Swiss Army knife type,” said Dom Hallas, head of British lobbying group Startup Coalition. “But he’s also just, like, a really nice guy.”
Bentinck, his EF co-founder, said that Clifford thinks up murder mystery games for colleagues to solve, and that he uses ChatGPT to write “storybooks” for his kids.
During the pandemic, as the British tech industry teetered on the brink, Clifford worked with Hallas and others to convince the Treasury to launch an emergency £250 million startup fund. “Whether it’s regulation, incentives, the crisis moments of the pandemic or the collapse of Silicon Valley Bank, Matt has been critical for facing those challenges,” Hallas said.
Clifford became the politician’s techie and the techie’s policy wonk. “He has cachet. He is very valued in the British tech community — which is in a way also why he’s valued by political people,” said Benedict Macon-Cooney, a chief strategist at the Tony Blair Institute for Global Change. But he is still a techie at heart. Clifford has taken a sabbatical from EF and plans to return after his summit work is wrapped up.
Building a British DARPA
After Boris Johnson triumphed in the U.K.’s 2019 general election, with tech-savvy enforcer Dominic Cummings in tow, Clifford started devoting more and more issues of his weekly newsletter, Thoughts in Between, to the subject of funding advanced science research.
He also launched a reading club focused on initiatives such as the Manhattan Project and the 1969 Apollo 11 moon landing that managed to “achieve exceptional collective output.” That was hardly by chance: Cummings (whose blog was included in the reading group’s syllabus) had made no mystery of his grand plan to create a “British DARPA” devoted to funding ambitious science projects, and it looked like he would get his way.
When the Advanced Research and Invention Agency (ARIA) was finally announced in 2021, Clifford would have had an easy case to make in his application for the chairmanship, to which he was appointed in July 2022: not only had he invested in technology companies for a decade, but he had also written extensively about how exactly the research agency should work. [Full disclosure: Clifford also wrote about ARIA in a WIRED op-ed that I commissioned as an editor back in 2020.]
“Most of my policy work came out of that newsletter,” Clifford said. “It had three main topics: geopolitics of technology, AI, and science funding and accelerating – all my ARIA conversations originally came out of writing, week in week out, about it.”
Writing about AI in the newsletter, which was well read among both techies and policymakers, might also have bolstered Clifford’s credentials for his current unpaid work on the summit. Likely, so did the fact he is on first-name terms with many Silicon Valley technologists building advanced AI systems. In late summer 2022, some six months before OpenAI launched its most powerful model, GPT-4, Clifford was offered an early demo that left him “mind blown.” (He declined to say exactly how he got the demo.)
Clifford is enthusiastic about AI’s advantages, from better medicine to more efficient public services. But to reap those, he thinks, you first need to get the people on board — hence the summit.
“AI is not very popular with the public,” he said. “Therefore talking about safety is not to scare the public: it’s actually to reassure them so that we can capture the benefit.”
The summit’s own focus on tail risks, rather than present concerns such as AI-fuelled bias and disinformation, has sparked speculation that its agenda is inspired by effective altruism, a strand of utilitarianism popular in elite universities and Silicon Valley, some of whose adherents worry about evil, almighty AIs’ potential to kill off humankind.
Clifford does not count himself as an effective altruist, although he seems generally sympathetic to their cause, going as far as speaking at a global effective altruism conference in June. “I have a lot of respect for a lot of [effective altruists and their] work but I’ve always been too much of a virtue ethicist to go all-in,” he said. Indeed, during his talk at the effective altruism event, he recommended that attendees read “After Virtue” by Alasdair MacIntyre — a thinker whose worldview is hardly utilitarian.
He pushes back on the idea that the summit has been captured by the “doomer narrative” espoused by some effective altruists. “Talking of killer robots — I don’t think that’s helpful at all,” he said. “[The summit] is much more about how we avoid a misuse that turns the public so much against AI that you get a chilling effect on adoption?”
Not a ‘political animal’
The call from No. 10 asking Clifford to help with the summit came at the end of a long stretch of AI-related work. In late 2022, he helped conduct a government review of emerging technologies where the U.K. could have a crack at setting standards: Clifford put special emphasis on AI, which seems to have influenced Sunak’s thinking.
A few weeks later, in March 2023, he was appointed to help build the U.K.’s task force focused on advanced AI, or frontier models, and in May he orchestrated the meeting between Rishi Sunak and the CEOs of AI labs OpenAI, DeepMind and Anthropic, all of which are now on the summit’s invitation list.
Despite once being an “ardent remainer” and Sunak being a Brexiteer, Clifford and the PM enjoy a good rapport, which Hallas said first became apparent when the two were on stage together at Treasury Connect, a conference then-Chancellor Sunak organized in 2021.
Politics rarely seems to factor into Clifford’s actions. “I’m not really a political animal,” he said. “My entire career I’ve been thinking about how to use technology as a source of leverage to make the world better.”
But over the past few years, and especially over the past few weeks, he has learned how to talk to politicians, and to win them over. “Politicians value that — being a successful entrepreneur, being a successful investor — I know what it takes to make technology work for people,” he said. “My starting point is: how do we get things done?”
In the dynamic world of soccer, goalkeepers have always been seen as outliers. While they defend their posts, these players face the arduous task of making quick decisions under pressure, often with fragmented information. New research sheds light on the exceptional way goalkeepers perceive their surroundings, revealing significant differences in their multisensory processing capabilities.
Enhanced Multi-Sensory Processing of Soccer Goalkeepers
Michael Quinn from Dublin City University, himself a former professional goalkeeper, embarked on this study to validate a longstanding soccer belief. He, alongside his team, found that, unlike other players, goalkeepers have an intrinsic knack for making swift decisions. This is the case even when faced with limited sensory data. It’s not just a feeling within the soccer community; now, there’s scientific evidence supporting the notion that goalkeepers genuinely “see” the world differently.
In an innovative approach, Quinn and his team examined temporal binding windows among professional goalkeepers, outfield soccer players, and those who don’t play soccer. This window represents the time frame within which individuals combine sensory data from various sources.
A Deep Dive into the Goalkeeper’s Brain
The study had participants discern visual and auditory stimuli that appeared in different sequences and intervals. Interestingly, goalkeepers exhibited a more refined ability to discern these multisensory cues, indicating their superior estimation of timing. This precision stands in stark contrast to outfield players and non-players.
Furthermore, goalkeepers demonstrated less interplay between visual and auditory cues, suggesting they tend to separate sensory information rather than blend it. This unique ability may stem from their need to process various cues simultaneously: the trajectory of the ball and the sound it makes when kicked are both essential inputs for a goalkeeper’s split-second decision-making.
Origins and Future Explorations into the Perceptions of Soccer Goalkeepers
While the current findings illuminate the distinct perceptual world of soccer goalkeepers, the cause of these differences remains a mystery. Does intense, specialized training from an early age shape their multisensory processing? Or are inherent abilities leading young players to gravitate toward the goalkeeper position?
David McGovern, the study’s lead investigator, expressed curiosity about other specialized soccer positions. Could strikers or center-backs also exhibit unique perceptual tendencies? The team at Dublin City University aims to unravel these questions in subsequent studies. They will explore the development and influences on a goalkeeper’s extraordinary sensory processing capabilities.
It’s as if one front in the Israel-Hamas war is playing out on the streets of Berlin.
The main battleground has been an avenue lined with chicken and kebab restaurants in Neukölln, a neighborhood in the south-east of the city that’s home to many Middle Eastern immigrants. Some pro-Palestinian activists have called for demonstrators to turn out almost nightly, and, as one post put it, turn the area “into Gaza.”
On October 18, hundreds of people, many of them teenagers, answered the call.
“From the river to the sea, Palestine will be free,” chanted many in the crowd as a phalanx of riot police closed in on them. Berlin public prosecutors say the slogan is a call for the erasure of Israel, and have moved to make its utterance a criminal offense.
While similar scenes have played out across much of the world, for Germany’s leaders, they are profoundly embarrassing and strike at the heart of the nation’s identity, on account of the country’s Nazi past.
Germany’s “history and our responsibility arising from the Holocaust make it our duty to stand up for the existence and security of the State of Israel,” Chancellor Olaf Scholz said during a visit to Israel on October 17 intended to illustrate Germany’s solidarity.
The difficulty for Scholz is that far from everyone in Germany sees it his way.
German leaders across the political spectrum expressed outrage when, after Hamas’ October 7 terrorist attack on Israeli civilians, dozens of people assembled in Neukölln to celebrate. One 23-year-old man, a Palestinian flag draped over his shoulders, handed out sweets.
A community on edge
Since then, tensions in Berlin and in other German cities have rapidly escalated. A surge in antisemitic incidents has left many in the country’s Jewish community on edge and German police have stepped up security at cultural institutions and houses of worship.
At the same time, German police have moved to ban many pro-Palestinian demonstrations, saying there is a high risk of “incitement to hatred” and a threat to public safety. Demonstrators have come out anyway, leading to violent clashes with police.
Some in Germany, particularly on the political left, have questioned whether the bans on pro-Palestinian protests are an overreach of the state, arguing that they stifle legitimate concerns about civilian casualties in Gaza stemming from Israel’s retaliatory strikes.
But Berlin authorities say, based on past experience, the likelihood of antisemitic rhetoric — even violence — at prohibited pro-Palestinian demonstrations is too high.
Many on the far-left have joined those protests that do take place.
On Wednesday night, around the same time demonstrators assembled in Neukölln, a group of a few hundred leftist activists showed up at a planned vigil for peace outside the foreign ministry.
“Free Palestine from German guilt,” they chanted in English. Germany, the argument went, should get over its Holocaust history, at least when it comes to support for Israel. The irony is that there is much sympathy for this view on the far right.
One recent poll showed that 78 percent of supporters of the far-right Alternative for Germany disagreed with the idea that the country has a “special obligation towards Israel.” Extreme-right politicians have also called on Germany to get over its “cult of guilt.”
For many in the country’s Jewish community — which in recent years has grown to an estimated 200,000 people, including many Israelis — the conflagration in the Middle East has made fear part of daily life.
Molotov cocktails
In the pre-dawn hours on Wednesday, two people wearing masks threw Molotov cocktails at a Berlin Jewish community hub that houses a synagogue. The incendiary devices hit the sidewalk, and no one was hurt. But the attack stoked profound alarm.
“Hamas’ ideology of extermination against everything Jewish is also having an effect in Germany,” said the Central Council of Jews in Germany, the country’s largest umbrella Jewish organization.
Since the Israel-Hamas war broke out, several homes in Berlin where Jews are thought to live have been marked with the Star of David.
“My first thought was: ‘It’s like the Nazi time,’” said Sigmount Königsberg, the antisemitism commissioner for Berlin’s Jewish Community, an organization that oversees local synagogues and other parts of Jewish life in the city. “Many Jews are hiding their Jewishness,” he added — in other words, concealing skullcaps or religious insignia out of fear of being attacked.
It remains unclear who perpetrated the firebombing attack and Star of David graffiti. But historical data shows a clear correlation between upsurges in Middle East violence and increased antisemitic incidents in Europe, according to academic researchers.
In the eight days following Hamas’s October 7 attack on Israel, there were 202 antisemitic incidents connected to the war, mostly motivated by “anti-Israel activism,” according to data compiled by the Anti-Semitism Research and Information Center.
Fears within the Jewish community were particularly prevalent after a former Hamas leader called for worldwide demonstrations in a “day of rage.” Many students at a Jewish school in Berlin stayed home. Two teachers wrote a letter to Berlin’s mayor to express their dismay that, as they put it, the school was nearly empty.
“This means de facto that Jew-haters have usurped the decision-making authority over Jewish life in Berlin,” they wrote. The teachers then blamed Germany’s willingness to take in refugees from war-torn places like Syria and Lebanon. “Germany has taken in and continues to take in hundreds of thousands of people whose socialization includes antisemitism and hatred of Israel,” they wrote.
Day of rage
Surveys show that Muslims in Germany are more likely to hold antisemitic views than the general population. Politicians often refer to this phenomenon as “imported antisemitism,” brought into the country through immigration from Muslim-majority nations.
At the same time, it was a far-right attacker who perpetrated some of the worst antisemitic violence in Germany’s recent history. That came in 2019, when a gunman tried to massacre 51 people celebrating Yom Kippur, the holiest day in Judaism, in a synagogue in the eastern German city of Halle. Two people were killed.
German neo-Nazis have praised Hamas’s October 7 attacks in Israel. One group calling itself the “Young Nationalists” posted a picture of a bloodstained Star of David on social media next to the slogan “Israel murders and the world watches.”
During the Neukölln demonstration, officers arrested individual protestors one by one, picking them out from the crowd and dragging them off by force.
The atmosphere grew increasingly tense. Demonstrators lobbed fireworks and bottles at the police. Dumpsters and tires were set alight. By the end of the night, police made 174 arrests, including 29 minors. Police said 65 officers were injured in the clashes.
At one point amid the chaos, a 15-year-old girl with a Palestinian keffiyeh — a black and white scarf — wrapped around most of her face emerged amid the smoke and explosions to pose for a selfie in front of a row of riot police.
She said she was there to demonstrate for “peace.” When asked how peace would be achieved, she replied: “When the Israeli side pisses off our land, there will be peace. Won’t there?”
There are more benefits to the snooze button than just getting an extra few minutes of sleep.
For many, the snooze button has been branded the ultimate “sleep disruptor.” But new findings from Stockholm University’s Department of Psychology may be about to turn this common belief on its head.
Snoozing: A Maligned Habit?
It’s a widely held belief that tapping that tempting snooze button might be doing us more harm than good. Critics claim it disrupts our sleep patterns, making us groggier and less alert when we eventually rise. But, is there any scientific basis to this belief?
The recent study led by Tina Sundelin of Stockholm University is turning this narrative around. Contrary to popular belief, hitting the snooze button might actually support the waking process for those who regularly find solace in those few extra minutes.
A Deep Dive into the Benefits of the Snooze Button
This comprehensive research involved two phases. The initial study surveyed 1,732 individuals on their morning habits. Findings highlighted that a significant number, especially among young adults and night owls, lean heavily on the snooze function. Their main reason? Feeling overwhelmingly fatigued when the first alarm rings.
The second phase delved deeper. Thirty-one habitual snoozers spent two nights in a sleep lab. On one morning, they had the luxury to snooze for an additional 30 minutes, while the other morning demanded an immediate wake-up call. Results revealed that most participants actually enjoyed more than 20 minutes of additional sleep during the snooze time. This had little impact on the overall quality or duration of their night’s rest.
What Does the Snooze Button Really Do?
Here’s the kicker: not only did the snooze function not disrupt the participants’ sleep, it also ensured no one was jolted awake from deep slumber. Moreover, those who indulged in that extra rest displayed slightly sharper cognitive abilities upon waking. Factors such as mood, overall sleepiness, or cortisol levels in the saliva remained unaffected.
Sundelin points out, “Our findings reveal that a half-hour snooze does not negatively impact night sleep or induce sleep inertia, which is that groggy feeling post-wakeup. In some instances, the results were even favorable. For example, we noticed a reduced chance of participants waking from deep sleep stages.”
While these findings might be a relief for serial snoozers, Sundelin adds a word of caution: “The study primarily focused on individuals who habitually hit the snooze button and can effortlessly drift back to sleep post-alarm. Snoozing might not be a one-size-fits-all solution.”
For those who relish those additional moments of rest in the morning, this research brings good news. Snoozing, at least for regular snoozers, doesn’t seem to steal away the quality of our sleep. On the contrary, it may subtly boost our cognitive processes during the waking stage.
So, the next time your alarm sounds and you’re contemplating another round with the snooze button, remember: You might not be losing out at all by grabbing those few extra minutes of shut-eye.
Novartis said an interim analysis from a phase 3 trial evaluating its investigational drug iptacopan in patients with the kidney disease IgA nephropathy achieved positive results, meeting its primary goal.
The Swiss pharmaceutical company said Monday that an analysis of study data at nine months showed a clinically meaningful and statistically significant reduction in proteinuria, or protein in the urine, demonstrating the superiority of iptacopan over placebo.
The safety profile of the drug was consistent with previously reported data, Novartis said.
Novartis said it plans to review the trial’s interim results with the U.S. Food and Drug Administration to enable a potential regulatory submission for accelerated approval.
The study will now continue to assess iptacopan’s ability to slow disease progression over two years, the company said. Results from the study’s final analysis are expected in 2025.
Write to Adria Calatayud at adria.calatayud@dowjones.com
Katalin Karikó and Drew Weissman have been awarded the Nobel prize in medicine for their work on messenger RNA technology, which enabled the development of the first vaccines against COVID-19.
The Nobel Assembly at Sweden’s Karolinska Institute, which is responsible for selecting the winner of one of science’s most prestigious prizes, said on Monday that the discoveries “were critical for developing effective mRNA vaccines against COVID-19.”
mRNA vaccines work by delivering into the body genetic instructions for building proteins that are present in the virus being immunized against. That spurs cells to create those proteins, which the body then recognizes as foreign and attacks, training the immune system and creating protection against the actual virus.
In the early 1990s, Karikó, from Hungary, was working at the University of Pennsylvania looking at how mRNA could be used in medicine. She was joined in her research by U.S. colleague Weissman, an immunologist specializing in dendritic cells, which are responsible for the body’s immune response during vaccination.
Together, the scientists discovered how to alter mRNA so that it wasn’t immediately detected by the body’s immune system and could deliver its payload to the target cells. Further work by the pair improved the efficiency of mRNA, so that it stimulated more protein production.
“Through their discoveries that base modifications both reduced inflammatory responses and increased protein production, Karikó and Weissman had eliminated critical obstacles on the way to clinical applications of mRNA,” said the Nobel Assembly.
As well as laying the groundwork for mRNA vaccines, Karikó was employed from 2013 to 2022 at vaccine developer BioNTech, which, together with Pfizer, produced the first COVID-19 vaccine approved in the EU.
Pharma companies are now developing mRNA vaccines and therapies for a swathe of different diseases including flu, tuberculosis, HIV, malaria, Lyme disease, Zika and various types of cancer.
The award comes with a cash prize of 11 million Swedish kronor (€950,000). The medicine prize has recognized vaccine work before: in 1951, Max Theiler won for helping to develop the yellow fever vaccine.
Elon Musk, the owner of X (formerly Twitter), said overnight that a global team working on curbing disinformation during elections had been dismissed — a mere two days after the EU’s digital chief singled out X as the online platform with the most falsehoods.
Responding to reports about cuts, the tech mogul said on X, “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone.”
Several Ireland-based staff working on a threat-disruption team — including senior manager Aaron Rodericks — were allegedly fired this week, according to tech media outlet The Information. Rodericks has, however, secured a court order halting disciplinary action over allegedly liking tweets critical of the company, according to Irish media.
European Commission Vice President Věra Jourová this week warned that EU-supported research showed that X had become the platform with the largest ratio of posts containing misinformation or disinformation. The company under Musk left the European Commission’s anti-disinformation charter in late May after failing its first test.
Jourová also urged tech companies to prepare for numerous national and European elections in the coming months, especially given the “particularly serious” risk that Russia will seek to meddle in them. Slovakia will hold its parliamentary election on Saturday. Poland, Luxembourg and the Netherlands will also head to the polls in the coming weeks.
X must comply with the EU’s content rules, the Digital Services Act (DSA), which requires large tech platforms with over 45 million EU users to mitigate the risks of disinformation campaigns. Failure to follow the rulebook could lead to sweeping fines of up to 6 percent of companies’ global annual revenue.
LONDON — It was the gleaming smiles and mutual backslapping of two 40-something banker bros which signalled a new era of U.K.-EU relations.
British Prime Minister Rishi Sunak and French President Emmanuel Macron looked like natural bedfellows as they riffed off one another at a friendly Paris press conference in March, announcing a sizeable £478 million package to deter migrant crossings through the English Channel.
The contrast with the petty name-calling of the Boris Johnson and Liz Truss eras was clear to see.
Sunak’s warm and productive summit with Europe’s most high-profile leader confirmed a more collaborative relationship with the EU and its national capitals after the turmoil of the Brexit era. Less than two weeks earlier, the British PM’s landmark Windsor Framework agreement with Brussels had finally resolved post-Brexit trading issues in Northern Ireland.
“My hope is that [the agreement] opens up other areas of constructive engagement and dialogue and cooperation with the EU,” Sunak told POLITICO en route to the Paris summit.
Six months on, his words have been borne out.
In addition to the Windsor Framework and English Channel agreements, Britain has signed a Memorandum of Understanding with Brussels on regulatory cooperation in financial services, and this month rejoined the EU’s massive €96 billion Horizon and Copernicus science research programs — a major result for the U.K.’s research and university sectors after two years of uncertainty.
Next on the agenda is a cooperation deal between the British government and the EU’s border protection agency Frontex — another move that brings Britain closer to the EU in a small but meaningful way.
The deal, confirmed by Home Secretary Suella Braverman on Tuesday, is expected to be similar to other deals Frontex has with non-EU countries, like Albania, which allow the sharing of data on migration flows.
“We have seen concrete steps created by a new climate of good faith,” said a London-based European diplomat, granted anonymity — like others in this article — to speak candidly about diplomatic relations.
“We missed that before, and so that’s the Sunak effect. I wouldn’t say he’s done an amazing job, but he’s changed the state of mind — and therefore he has changed everything.”
A new hope
In addition to a renewed focus on relations with fellow leaders, Sunak has impressed EU diplomats with his willingness to face down the vocal Brexiteer wing of his own party, which has long seemed — to European eyes — to hold outsized influence over successive Tory prime ministers.
Earlier this year, to the delight of EU capitals, Sunak enraged Tory right-wingers by abandoning a controversial pledge to scrap or rewrite, by the end of this year, thousands of EU-era regulatory laws that remain on the British statute book.
“The improving relationship is built on the fact there’s now a willingness to find solutions and engage in a way that wasn’t there in the previous administrations,” a second London-based European diplomat said.
Negotiations continue between Sunak’s government and Brussels over other outstanding areas of dispute, chief among them tough new tariffs due to be imposed in January on electric vehicles (EVs) shipped between the U.K. and the EU that do not meet strict sourcing requirements for electric batteries.
On Wednesday the U.K.-EU Trade Specialised Committee will meet to discuss the issue, with British ministers increasingly hopeful Brussels will agree to scrap the end-of-year deadline after heavy lobbying from German automakers and the EU’s own trade commissioner, Valdis Dombrovskis.
Catherine Barnard, a European law professor at Cambridge University, said overall Sunak had overseen a “much more positive relationship” with Europe, albeit one conducted on a “pay-as-you-go basis.”
“This is looking much more positive and it’s putting some meaning on dealing with our European neighbors as friends, rather than as foes,” she said.
“But equally, we’re not talking about a comprehensive and thorough renegotiation — quite the contrary.”
No. 10 Downing Street agrees the shift is less profound than some media observers — or grumbling Tory MPs — would like to think.
A No. 10 aide said Sunak sees his diplomatic efforts as “normal government,” noting that “we’ve just forgotten what it looks like” after the turmoil of the post-Brexit era.
“I know it’s following Brexit and all that nonsense we’ve seen over the last few years, and it’s nice to see any small win or small argument to bridge that divide, but this is just normal government relations,” the aide said.
But his opponent, U.K. Labour leader Keir Starmer, has made clear he too wants closer cooperation with Europe should he win power.
Starmer said this month a future Labour government would use the upcoming review of the post-Brexit trade deal, expected in 2025 or 2026, as a chance to reduce border checks through the signing of a veterinary agreement and to increase U.K.-EU mobility for some sectors of the economy.
And he told a conference in Montreal last weekend that “we don’t want to diverge from the EU” in areas such as working conditions or environmental standards.
These comments were seized upon by Tory ministers as evidence that Starmer would bring the U.K. even further into the EU’s orbit than he has publicly admitted — something the Labour leader denies. Tory campaigners hope to use such comments in campaign attacks painting Starmer as an anti-Brexit europhile.
But some observers suggest such political attacks are ironic, given Sunak’s own direction of travel. Barnard, quoted above, says that “what Keir Starmer was saying in Canada last week is pretty much a description of where we’re at at the moment.”
A senior moderate Tory MP said that despite the attacks on Starmer, Sunak is “not overly ideological when it comes to the EU.”
“There’s always been a belief in Brussels that we would inevitably come crawling back to them, and we’re seeing that a bit now,” they said.
Nevertheless, it is unclear how much closer Britain and the EU can get without a fundamental renegotiation of the terms of Brexit — something all sides insist is off the table.
One area for agreement is the need for enhanced security and defence links, with next year’s European Political Community Summit in Britain providing a potential opportunity for further announcements.
Some in Westminster speculate that this could come in the form of Britain joining individual projects of the EU’s Permanent Structured Cooperation — a body which coordinates the bloc’s security and defence policy. The European Council invited Britain to join its “military mobility project” alongside Canada, Norway and the U.S. in November 2022.
Anand Menon, director of the UK in a Changing Europe think tank, said he’s “not convinced” of the potential benefits for Britain, considering the U.K.’s existing position in NATO and other organizations.
He believes the British government will run out of road in finding mutually beneficial areas of cooperation with Brussels.
“The EU is relatively happy with the status quo,” Menon said. “It’s only in the U.K. where people say we need to move closer … There are so many bigger fish to fry for the EU.”
LONDON — Back in the spring, Britain was sounding pretty relaxed about the rise of AI. Then something changed.
The country’s artificial intelligence white paper — unveiled in March — dealt with the “existential risks” of the fledgling tech in just four words: high impact, low probability.
Less than six months later, Prime Minister Rishi Sunak seems newly troubled by runaway AI. He has announced an international AI Safety Summit, referred to “existential risk” in speeches, and set up an AI safety taskforce with big global aspirations.
Helping to drive this shift in focus is a chorus of AI Cassandras associated with a controversial ideology popular in Silicon Valley.
Known as “Effective Altruism,” the movement was conceived in the ancient colleges of Oxford University, bankrolled by the Silicon Valley elite, and is increasingly influential on the U.K.’s positioning on AI.
Not everyone’s convinced it’s the right approach, however, and there’s mounting concern Britain runs the risk of regulatory capture.
The race to ‘God-like AI’
Effective altruists claim that super-intelligent AI could one day destroy humanity, and advocate policy that’s focused on the distant future rather than the here-and-now. Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.
“The view is that the outcome of artificial super-intelligence will be binary,” says Émile P. Torres, philosopher and former EA, turned critic of the movement. “That if it’s not utopia, it’s annihilation.”
In the U.K., key government advisers sympathetic to the movement’s concerns, combined with Sunak’s close contact with leaders of the AI labs – which have longstanding ties to the movement – have helped push “existential risk” right up the U.K.’s policy agenda.
When ChatGPT-mania reached its zenith in April, tech investor Ian Hogarth penned a viral Financial Times article warning that the race to “God-like AI” “could usher in the obsolescence or destruction of the human race” – urging policymakers and AI developers to pump the brakes.
It echoed the influential “AI pause” letter calling for a moratorium on “giant AI experiments,” and, in combination with a later letter saying AI posed an extinction risk, helped fuel a frenzied media cycle that prompted Sunak to issue a statement claiming he was “looking very carefully” at this class of risks.
“These kinds of arguments around existential risk or the idea that AI would develop super-intelligence, that was very much on the fringes of credible discussion,” says Mhairi Aitken, an AI ethics researcher at the Alan Turing Institute. “That’s really dramatically shifted in the last six months.”
The EA community credited Hogarth’s FT article with telegraphing these ideas to a mainstream audience, and hailed his appointment as chair of the U.K.’s Foundation Model Taskforce as a significant moment.
Under Hogarth, who has previously invested in AI labs Anthropic, Faculty, Helsing, and AI safety firm Conjecture, the taskforce announced a new set of partners last week – a number of whom have ties to EA.
Three of the four partner organizations on the lineup are bankrolled by EA donors. The Center for AI Safety is the organization behind the “AI extinction risk” letter (the “AI pause” letter was penned by another EA-linked organization, the Future of Life Institute). Its primary funding – to the tune of $5.2 million – comes from the major EA donor organization Open Philanthropy.
Another partner is Arc Evals, which “works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilization.”
It’s a project of the Alignment Research Center, an organization that has received $1.5 million from Open Philanthropy, $1.25 million from high-profile EA Sam Bankman-Fried’s FTX Foundation (which it promised to return after the implosion of his crypto empire), and $3.25 million from the Survival and Flourishing Fund, set up by Skype founder and prominent EA Jaan Tallinn. Arc Evals is advised by Open Philanthropy CEO Holden Karnofsky.
Finally, the Collective Intelligence Project, a body working on new governance models for transformative technology, began life with an FTX regrant, and a co-founder appealed to the EA community for funding and expertise this year.
Joining the taskforce as one of two researchers is Cambridge professor David Krueger, who has received a $1 million grant from Open Philanthropy to further his work to “reduce the risk of human extinction resulting from out-of-control AI systems.” He describes himself as “EA-adjacent.” One of the PhD students Krueger advises, Nitarshan Rajkumar, has been working with the British government’s Department for Science, Innovation and Technology (DSIT) as an AI policy adviser since April.
A range of national security figures and renowned computer scientist, Yoshua Bengio, are also joining the taskforce as advisers.
Combined with its rebranding as a “Frontier AI Taskforce” which projects its gaze into the future of AI development, the announcements confirmed the ascendancy of existential risk on the U.K.’s AI agenda.
‘X-risk’
Hogarth told the FT that biosecurity risks – like AI systems designing novel viruses – and AI-powered cyber-attacks weigh heavily on his mind. The taskforce is intended to address these threats, and to help build safe and reliable “frontier” AI models.
“The focus of the Frontier AI Taskforce and the U.K.’s broader AI strategy extends to not only managing risk, but ensuring the technology’s benefits can be harnessed and its opportunities realized across society,” said a government spokesperson, who disputed the influence of EA on its AI policy.
But some researchers worry that the more prosaic threats posed by today’s AI models, like bias, data privacy, and copyright issues, have been downgraded. It’s “a really dangerous distraction from the discussions we need to be having around regulation of AI,” says Aitken. “It takes a lot of the focus away from the very real and ethical risks and harms that AI presents today.”
The EA movement’s links to Silicon Valley also prompt some to question its objectivity. The three most prominent AI labs, OpenAI, DeepMind and Anthropic, all boast EA connections – with traces of the movement variously imprinted on their ethos, ideology and wallets.
Tech mogul Elon Musk claims to be a fan of the closely related “longtermist” ideology, calling it a “close match” to his own. Musk recently hired Dan Hendrycks, director of the Center for AI Safety, as an adviser to his new start-up, xAI, which is also doing its part to prevent the AI apocalypse.
To counter the threat, the EA movement is throwing its financial heft behind the field of AI safety. Open Philanthropy CEO Holden Karnofsky wrote a February blog post announcing a leave of absence to devote himself to the field, while an EA career advice center, 80,000 Hours, recommends “AI safety technical research” and “shaping future governance of AI” as the two top careers for EAs.
Trading in an insular jargon of “X-risk” (existential risks) and “p(doom)” (the probability of our impending annihilation), the AI-focused branch of effective altruism is fixated on issues like “alignment” – how closely AI models are attuned to humanity’s value systems – amid doom-laden warnings about “proliferation” – the unchecked propagation of dangerous AI.
Despite its popularity among a cohort of technologists, critics say the movement’s thinking lacks evidence and is alarmist. A vocal critic, former Googler Timnit Gebru, has denounced this “dangerous brand of AI safety,” noting that she’d seen the movement gain “alarming levels of influence” in Silicon Valley.
Meanwhile, the “strong intermingling” of EAs and companies building AI “has led…this branch of the community to be very subservient to the AI companies,” says Andrea Miotti, head of strategy and governance at AI safety firm Conjecture. He calls this a “real regulatory capture story.”
The pitch to industry
Citing the Center for AI Safety’s extinction risk letter, Hogarth called on AI specialists and safety researchers to join the taskforce’s efforts in June, noting that at “a pivotal moment, Rishi Sunak has stepped up and is playing a global leadership role.”
On stage at the Tony Blair Institute conference in July, Hogarth – perspiring in the midsummer heat but speaking with composed conviction – struck an optimistic note. “We want to build stuff that allows for the U.K. to really have the state capacity to, like, engineer the future here,” he said.
Although the taskforce was initially intended to build up sovereign AI capability, Hogarth’s arrival saw a new emphasis on AI safety. The U.K. government’s £100 million commitment is “the largest amount ever committed to this field by a nation state,” he tweeted.
The taskforce recruitment ad was shared on the Effective Altruism forum, and Hogarth’s appointment was announced in Effective Altruism UK’s July newsletter.
Hogarth is not the only one in government who appears to be sympathetic to the EA movement’s arguments. Matt Clifford, chair of the government R&D body ARIA, an adviser to the AI taskforce and the government’s AI sherpa for the safety summit, has urged EAs to jump aboard the government’s latest AI safety push.
“I would encourage any of you who care about AI safety to explore opportunities to join or be seconded into government, because there is just a huge gap of knowledge and context on both sides,” he said at the Effective Altruism Global conference in London in June.
“Most people engaged in policy are not familiar … with arguments that would be familiar to most people in this room about risk and safety,” he added, but cautioned that hyping apocalyptic risks was not typically an effective strategy when it came to dealing with policymakers.
Clifford said that ARIA would soon announce directors who will be in charge of grant-giving across different areas. “When you see them, you will see there is actually a pretty good overlap with some prominent EA cause areas,” he told the crowd.
A British government spokesperson said Clifford is “not part of the core Effective Altruism movement.”
Civil service ties
Influential civil servants also have EA ties. Supporting the work of the AI taskforce is Chiara Gerosa, who in addition to her government work is facilitating an introductory AI safety course “for a cohort of policy professionals” for BlueDot Impact, an organization funded by Effective Ventures, a philanthropic fund that supports EA causes.
The course “will get you up to speed on extreme risks from AI and governance approaches to mitigating these risks,” according to the website, which states alumni have gone on to work for the likes of OpenAI, GovAI, Anthropic, and DeepMind.
People close to the EA movement say that its disciples see the U.K.’s AI safety push as encouragement to get involved and help nudge policy along an EA trajectory.
EAs are “scrambling to be part of Rishi Sunak’s announced Foundation Model Taskforce and safety conference,” according to an AI safety researcher who asked not to be named as they didn’t want to risk jeopardizing EA connections.
“One said that while Rishi is not the ‘optimal’ candidate, at least he knows X-risk,” they said. “And that ‘we’ need political buy-in and policy.”
“The foundation model taskforce is really centring the voices of the private sector, of industry … and that in many cases overlaps with membership of the Effective Altruism movement,” says Aitken. “That to me, is very worrying … it should really be centring the voices of impacted communities, it should be centring the voices of civil society.”
Jack Stilgoe, policy co-lead of Responsible AI, a body funded by the U.K.’s R&D funding agency, is concerned about “the diversity of the taskforce.” “If the agenda of the taskforce somehow gets captured by a narrow range of interests, then that would be really, really bad,” he says, adding that the concept of alignment “offers a false solution to an imaginary problem.”
A spokesperson for Open Philanthropy, Michael Levine, disputed that the EA movement carried any water for AI firms. “Since before the current crop of AI labs existed, people inspired by effective altruism were calling out the threats of AI and the need for research and policies to reduce these risks; many of our grantees are now supporting strong regulation of AI over objections from industry players.”
From Oxford to Whitehall, via Silicon Valley
Birthed at Oxford University by rationalist utilitarian philosopher William MacAskill, EA began life as a technocratic preoccupation with how charitable donations could be optimized to wring out maximal benefit for causes like global poverty and animal welfare.
Over time, it fused with transhumanist and techno-utopian ideals popular in Silicon Valley, and a mutated version called “long-termism” that is fixated on ultra-long-term timeframes now dominates. MacAskill’s most recent book What We Owe the Future conceptualizes a million-year timeframe for humanity and advocates the colonization of space.
Oxford University remains an ideological hub for the movement, and has spawned a thriving network of think tanks and research institutes that lobby the government on long-term or existential risks, including the Centre for the Governance of AI (GovAI) and the Future of Humanity Institute.
Other EA-linked organizations include Cambridge University’s Centre for the Study of Existential Risk, which was co-founded by Tallinn and receives funding from his Survival and Flourishing Fund – which is also the primary funder of the Centre for Long Term Resilience, set up by former civil servants in 2020.
The think tanks tend to overlap with leading AI labs, both in terms of membership and policy positions. For example, the founder and former director of GovAI, Allan Dafoe, who remains chair of the advisory board, is also head of long-term AI strategy and governance at DeepMind.
“We are conscious that dual roles of this form warrant careful attention to conflicts of interest,” reads the GovAI website.
GovAI, OpenAI and Anthropic declined to offer comment for this piece. A Google DeepMind spokesperson said: “We are focused on advancing safe and responsible AI.”
The movement has been accruing political capital in the U.K. for some time, says Luke Kemp, a research affiliate at the Centre for the Study of Existential Risk who doesn’t identify as EA. “There’s definitely been a push to place people directly out of existential risk bodies into policymaking positions,” he says.
CLTR’s head of AI policy, Jess Whittlestone, is in the process of being seconded to DSIT one day a week to assist on AI policy in the run-up to the AI Safety Summit, according to a CLTR August update seen by POLITICO. In the interim, she is informally advising several policy teams across DSIT.
Meanwhile, a former specialist adviser to the Cabinet Office, Markus Anderljung, is now head of policy at GovAI.
Kemp says he has expressed reservations about existential risk organizations attempting to get staff members seconded to government. “We can’t be trusted as objective and fair regulators or scholars, if we have such deep connections to the bodies we’re trying to regulate,” he says.
“I share the concern about AI companies dominating regulatory discussions, and have been advocating for greater independent expert involvement in the summit to reduce risks of regulatory capture,” said CLTR’s Head of AI Policy, Dr Jess Whittlestone. “It is crucial for U.K. AI policy to be informed by diverse perspectives.”
Instead of the risks of existing foundation models like GPT-4, EA-linked groups and AI companies tend to talk up the “emergent” risks of frontier models — a forward-looking stance that nudges the regulatory horizon into the future.
This framing “is a way of suggesting that that’s why you need to have Big Tech in the room – because they are the ones developing these frontier models,” suggests Aitken.
At the frontier
Earlier in July, CLTR and GovAI collaborated on a paper about how to regulate so-called frontier models, alongside academics and staff from DeepMind, OpenAI and Microsoft. The paper explored the controversial idea of licensing the most powerful AI models, a proposal that has been criticized for its potential to cement the dominance of leading AI firms.
CLTR presented the paper to No. 10 with the prime minister’s special advisers on AI and the director and deputy director of DSIT in attendance, according to the CLTR memo.
Such ideas appear to be resonating. In addition to announcing the “Frontier AI Taskforce”, the government said in September that the AI Summit would focus entirely on the regulation of “frontier AI.”
The British government disputes the idea that its AI policy is narrowly focused. “We have engaged extensively with stakeholders in creating our AI regulation white paper, and have received a broad and diverse range of views as part of the recently closed consultation process which we will respond to in due course,” said a spokesperson.
Spokespeople for CLTR and CSER said that both groups focus on risks across the spectrum, from near-term to long-term, while a CLTR spokesperson stressed that it’s an independent and non-partisan think tank.
Some say that it’s the external circumstances that have changed, rather than the effectiveness of the EA lobby. CSER professor Haydn Belfield, who identifies as an EA, says that existential risk think tanks have been petitioning the government for years – on issues like pandemic preparedness and nuclear risk in addition to AI.
Although the government appears more receptive to their overtures now, “I’m not sure we’ve gotten any better at it,” he says. “I just think the world’s gotten worse.”
Update: This story has been updated to clarify Luke Kemp’s job title.