SARCELLES, France — In the usually lively “Little Jerusalem” neighborhood of Sarcelles, the only people loitering are gun-toting French soldiers on patrol.
Since Hamas’ deadly assault against Israel on October 7, this largely Jewish enclave in the northern suburbs of Paris has gone eerily quiet. Locals keep their movements to a minimum, and restaurants and cafés sit bereft of their regular clientele, as residents fear a rising number of antisemitic attacks across France.
“People are afraid, in a state of shock, they’ve lost their love for life,” said Alexis Timsit, manager of a kosher pizzeria. “My business is down 50 percent, there’s no bustle in the street, nobody taking a stroll,” he said in front of a large screen broadcasting round-the-clock coverage of the war.
France has seen more antisemitic incidents in the last three weeks than over the whole of the past year: 501 offenses, ranging from verbal abuse and antisemitic graffiti to death threats and physical assaults, have been reported. Antisemitic acts under investigation include groups gathering in front of synagogues shouting threats, and graffiti such as the words “killing Jews is a duty” sprayed outside a stadium in Carcassonne in the southwest. The interior minister has deployed extra police and soldiers at Jewish schools, places of worship and community centers since the attacks; in Sarcelles, that means soldiers guard school pick-ups and drop-offs.
“I try not to show my daughter that I’m afraid,” said Suedu Avner, who hopes the conflict won’t last too long. But a certain panic has taken hold in the community in the wake of the Hamas attacks, in some cases spreading like wildfire on WhatsApp groups. On one particularly tense day, parents even pulled their children out of school.
France is home to the largest Jewish community outside Israel and the U.S., estimated at about 500,000, and one of the largest Muslim communities in Europe. Safety concerns aren’t new to France’s Jewish community, which has, to some degree, remained on alert amid a string of Islamist terror attacks on French soil over the last decade.
Israel’s war against Hamas is now threatening the fragile peace in places like Sarcelles, one of the poorest cities in France, where thousands of Jews live in low-income housing estates alongside mostly Muslim neighbors of North African immigrant origin.
Authorities, meanwhile, are often torn between conflicting imperatives — between the Jews, who are fearful for their safety, and the Muslims, who feel an affinity for the Palestinian cause. During his visit to Israel and the Palestinian Territories, French President Emmanuel Macron himself struggled to strike a balance between supporting Israel in its fight against Hamas and calling for the preservation of Palestinian lives.
A community under threat
For Timsit, the threat is very real. His pizzeria was ransacked by rioters a couple of months ago, when the fatal shooting of a teenager by a police officer in a Paris suburb caused unrest in poor housing estates across France.
The attack was not antisemitic, he said, but it was a violent reminder of how quickly unrest can turn on Jewish businesses: In 2014, a pro-Palestinian demonstration protesting Israel’s ground offensive in Gaza degenerated into an antisemitic riot against Jewish shops. “All you need is a spark to set it off again,” said Timsit.
France’s Jews have seen an increase in antisemitic attacks since the early 2000s, a reality that cuts deep into the national psyche given the memories of France’s collaboration with Nazi Germany in the Second World War.
“The fear of violence [in France] appeared with the Second Intifada,” said Marc Hecker, a specialist on the Israeli-Palestinian conflict at the IFRI think tank, referring to the uprising against Israeli occupation in the Palestinian Territories.
Patrick Haddad, the mayor of Sarcelles, is working to keep the communities together | Clea Caulcutt/POLITICO
“Every time the situation in the Near East flares up, there’s an increase in antisemitic offenses in France,” he added. The threat of antisemitic attacks has led to increased security at Jewish schools and synagogues, and has discouraged many French Jews from wearing their kippahs in some areas, according to Jewish organizations.
In addition to low-level attacks, French Jews have also been a prime target for Islamists as France has battled a wave of terrorist attacks over the last decade that have hit schools, bars and public buildings, among other targets. In 2012, three children and a rabbi were shot dead at point-blank range at a Jewish school in Toulouse by Mohamed Merah, a gunman who had claimed allegiance to al-Qaida. In 2015, four people were killed at a kosher supermarket near Paris.
While Hamas, al-Qaida and ISIS networks are separate, Hecker warned that the scale of Hamas’ attack against Israel has “galvanized” Islamists across the board, once again sparking deep fears among France’s Jews.
Delicate local balance
Many of Sarcelles’ Jews are Sephardic — that is, of Spanish descent — their ancestors having ended up in North Africa when Spain expelled its Jewish population in 1492. Most came to France after having lived in the former French colonies of Algeria and Tunisia. Sarcelles’ Muslim population therefore shares a cultural and linguistic history with its Jewish community, and the two groups have lived together in relative harmony for decades.
In his office, the mayor of Sarcelles, Patrick Haddad, stands under the twin gazes of Nelson Mandela and Marianne, the symbol of French republicanism, with pictures of both adorning his wall, as he reflects on the thus-far peaceful coexistence among the local population.
“There’s been not a single antisemitic attack in Sarcelles since the attacks … It’s been over two weeks, and we are holding things together,” he said, smiling despite the noticeable strain. Relations between the city’s Muslims and Jews are amicable, said Haddad, and locals on the streets are proud of their friendship with people of a different religion.
Israel’s war on Hamas is testing relations in Sarcelles, one of France’s poorest cities | Clea Caulcutt/POLITICO and Bertrand Guay/AFP via Getty Images
“Relations are easy, we share a similar culture, a lot of the Jews are originally from Tunisia, Algeria, they even speak some Arabic,” said Naima, a Muslim retiree who did not want to give her surname to protect her privacy. “My family, my husband and my children respect the Jews, but I know many who are angry with Israel,” said Naima, who moved to France from Algeria as a young adult.
“I’ve got Muslim friends, we get along fine, we don’t go around punching each other,” said Avner.
But for many, politics — and the Israeli-Palestinian conflict — is off-limits, and communities live relatively separate lives, with most Jewish pupils enrolled in religious schools. Many Jews from Sarcelles have also chosen to emigrate to Israel in recent years.
But Israel’s image as the ultimate, secure sanctuary for Jews has been shattered after Hamas killed more than 1,400 Israelis in horrific attacks, said Haddad.
“Where are [Jews] going to go if they are not safe in Israel? People’s fears have been magnified, they fear what is happening here, and they are anguished about what is happening in the ‘sanctuary state’ for Jews,” he said.
In one of the many tragic reversals of Jewish history, several French families have returned from Israel since the Hamas attacks to find temporary shelter in the relative peace of Sarcelles.
There are several reasons why you’d want to use a VPN on your iPhone. Let’s first briefly talk about what a VPN is.
What is a VPN?
A VPN, or virtual private network, is a service that creates a secure connection over the internet between your iPhone and a remote server. This connection hides your IP address and encrypts your data, making it harder for others to track your online activities.
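One way to see the IP-masking effect for yourself is to query a public “what is my IP” service before and after connecting. Here is a minimal sketch in Swift, using ipify’s free lookup endpoint (one of several such services) and assuming iOS 15/macOS 12 or later for the async URLSession API:

```swift
import Foundation

// Minimal sketch: look up this device's public IP address using
// ipify (https://api.ipify.org), a free "what is my IP" service.
// Run it once before connecting to a VPN and once after; the address
// should change from your ISP-assigned IP to one belonging to the
// VPN server.
func fetchPublicIP() async throws -> String {
    let url = URL(string: "https://api.ipify.org")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return String(decoding: data, as: UTF8.self)
}

Task {
    do {
        let ip = try await fetchPublicIP()
        print("Current public IP: \(ip)")
    } catch {
        print("Lookup failed: \(error)")
    }
    exit(0)
}
RunLoop.main.run() // keep the process alive until the lookup finishes
```

With the VPN off, the printed address is the one your ISP assigned; with it on, it should belong to the VPN provider instead.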
Protect Your Privacy and Security
If you don’t want your ISP, the government, hackers, or your employer to see what you’re doing on the internet, a VPN can help. It adds an extra layer of security, especially when connected to a public Wi-Fi network, protecting your data from potential threats.
Access Geo-Restricted Content
A VPN can also be used to access geo-restricted content. For instance, if you’re traveling in another country but want to access the US version of Netflix, a VPN can make it seem like you’re still at home, allowing you to access content that is typically unavailable in other countries.
So, if you’re concerned about privacy and security or want to access geo-restricted content, a VPN is just what you need on your iPhone. Now let’s discuss which VPN to choose and how to set it up.
When choosing a VPN, consider what you need from the service and how much you’re willing to pay. While there are free VPN options available, it’s recommended to go for paid options for better encryption, faster speeds, and more features.
Here are three paid VPNs that are considered some of the best:
ExpressVPN
This VPN offers high-quality service and a user-friendly app. It has a wide range of server locations and excellent security features. ExpressVPN is priced at $99.95 per year.
NordVPN
NordVPN’s iOS app is simple to use and offers cheaper introductory pricing of $60 for the first year. It provides various specialty servers, like Onion over VPN, which lets you use the Onion network.
Surfshark
Surfshark’s iOS app is the most affordable option, priced at $60 per year ($48 introductory price for the first year). It comes with features like dynamic multihop connections, an IP rotator, and an ad and tracker blocker.
To explore more VPN options, you can check out our full list of the best iPhone VPNs in 2023. However, it’s important to note that free VPNs may offer weaker encryption, slower speeds, and potential security risks due to data collection and advertisements.
Using a VPN on your iPhone is simple. Here’s a step-by-step guide:
1. Go to the App Store and search for the VPN app you want to use. Download and install it on your iPhone.
2. Open the VPN application, create an account, and sign up for a plan. Look for any available free trials or special offers.
3. Choose a location and connect to the VPN server. You may be asked to install a new VPN profile on your iPhone, which can be done by tapping “Allow” and entering your passcode.
4. Once connected, explore the various settings in your VPN app. Many VPNs allow for multiple simultaneous connections, so you can protect your other devices as well.
If you want quick access to your VPN, go to Settings > VPN. From there, you can easily disconnect and connect to the last VPN server you were connected to.
By following these steps, you can enjoy the benefits of using a VPN on your iPhone and ensure your privacy and security while browsing the internet.
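For the curious, the “new VPN profile” prompt in step 3 comes from Apple’s NetworkExtension framework, which VPN apps use to register their configuration with the system. The Swift sketch below shows roughly how a personal VPN (IKEv2) profile might be installed and started; the server address and username are hypothetical placeholders, and a real app would also need Apple’s Personal VPN entitlement and proper credentials from its provider:

```swift
import NetworkExtension

// Rough sketch of how a VPN app installs and starts a personal VPN
// profile. "vpn.example.com" and the username are hypothetical
// placeholders; a real app needs the Personal VPN entitlement and
// real credentials (typically certificate- or shared-secret-based).
func installAndStartVPN() {
    let manager = NEVPNManager.shared()
    manager.loadFromPreferences { loadError in
        if let loadError { return print("Load failed: \(loadError)") }

        let ikev2 = NEVPNProtocolIKEv2()
        ikev2.serverAddress = "vpn.example.com" // placeholder
        ikev2.username = "user@example.com"     // placeholder
        ikev2.useExtendedAuthentication = true

        manager.protocolConfiguration = ikev2
        manager.localizedDescription = "Example VPN"
        manager.isEnabled = true

        // Saving the preferences is what triggers the system's
        // "Allow" / passcode prompt described in step 3 above.
        manager.saveToPreferences { saveError in
            if let saveError { return print("Save failed: \(saveError)") }
            do {
                try manager.connection.startVPNTunnel()
            } catch {
                print("Could not start tunnel: \(error)")
            }
        }
    }
}
```

The toggle under Settings > VPN controls the same saved configuration, which is why you can disconnect and reconnect from there without opening the app.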
DENVER — Colorado’s highest court on Monday upheld the search of Google users’ keyword history to identify suspects in a 2020 fatal arson fire, an approach that critics have called a digital dragnet that threatens to undermine people’s privacy and their constitutional protections against unreasonable searches and seizures.
However, the Colorado Supreme Court cautioned that it was not making a “broad proclamation” on the constitutionality of such warrants and emphasized it was ruling on the facts of just this one case.
At issue before the court was a search warrant from Denver police requiring Google to provide the IP addresses of anyone who, over a 15-day period, had searched for the address of the home that was set on fire, killing five immigrants from the West African nation of Senegal.
After some back and forth over how Google could provide information without violating its privacy policy, Google produced a spreadsheet of 61 searches made by eight accounts. Google provided the IP addresses for those accounts, but no names. Five of the IP addresses were based in Colorado, and police obtained the names of those people through another search warrant. After investigating those people, police eventually identified three teens as suspects.
One of them, Gavin Seymour, asked the court to throw the evidence out because it violated the Fourth Amendment’s ban on unreasonable searches and seizures by being overbroad and not being targeted against a specific person suspected of a crime.
Search warrants to gather evidence are typically sought once police have identified a suspect and gathered some probable cause to believe they committed a crime. But in this case, the trail had run cold and police were seeking a “reverse keyword” warrant for the Google search history in a quest to identify possible suspects. Since the attack seemed targeted, investigators believed whoever set fire to the house would have searched for directions to it.
The state Supreme Court ruled that Seymour had a constitutionally protected privacy interest in his Google search history even though it was connected only with an IP address and not his name. While the court said it assumed the warrant was “constitutionally defective” for not specifying an “individualized probable cause,” it declined to throw out the evidence because police were acting in good faith under what was known about the law at the time.
The court said it was not aware of any other state supreme court or federal appellate court that has dealt with this type of warrant before.
“Our finding of good faith today neither condones nor condemns all such warrants in the future. If dystopian problems emerge, as some fear, the courts stand ready to hear argument regarding how we should rein in law enforcement’s use of rapidly advancing technology. Today, we proceed incrementally based on the facts before us,” it said.
In a dissent, Justice Monica Marquez said such a wide-ranging search of a billion Google users’ search history without a particular target is exactly the kind of search the Fourth Amendment was designed to stop.
“At the risk of sounding alarmist, I fear that by upholding this practice, the majority’s ruling today gives constitutional cover to law enforcement seeking unprecedented access to the private lives of individuals not just in Colorado, but across the globe. And I fear that today’s decision invites courts nationwide to do the same,” she said in the dissent, which Justice Carlos Samour joined.
In a statement, Google said it was important that the court’s ruling recognized the privacy and First Amendment interests involved in keyword searches.
“With all law enforcement demands, including reverse warrants, we have a rigorous process designed to protect the privacy of our users while supporting the important work of law enforcement,” it said.
The ruling allows the prosecution of Seymour and Kevin Bui, who were 16 at the time of the Aug. 5, 2020, fire, to move ahead in adult court on charges of first-degree murder, attempted murder, arson and burglary. Investigators allege Bui organized the attack on the home because he mistakenly believed people who had stolen his iPhone during a robbery lived there.
Telephone messages and an email sent to Seymour’s lawyers, Jenifer Stinson and Michael Juba, were not immediately returned. A lawyer for Bui, Christian Earle, could not be reached for comment.
A third teen, Dillon Siebert, who was 14 at the time and originally charged as a juvenile, pleaded guilty earlier this year to second-degree murder in adult court under a deal that prosecutors and the defense said balanced his lesser role in planning the fire, his remorse and interest in rehabilitation with the horror of the crime. He was sentenced to 10 years behind bars.
A federal judge is scheduled to hear arguments Thursday in a case filed by TikTok and five Montana content creators who want the court to block the state’s ban on the video sharing app before it takes effect Jan. 1.
U.S. District Judge Donald Molloy of Missoula is not expected to rule immediately on the request for a preliminary injunction.
Montana became the first state in the U.S. to pass a complete ban on the app, based on the argument that the Chinese government could gain access to user information from TikTok, whose parent company, ByteDance, is based in Beijing.
Content creators say the ban violates free speech rights and could cause economic harm for their businesses.
TikTok said in court filings that the state passed its law based on “unsubstantiated allegations,” that Montana cannot regulate foreign commerce, and that the state could have passed a law requiring TikTok to limit the kinds of data it collects, or requiring parental controls, rather than trying to enact a complete ban.
Western governments have expressed worries that the popular social media platform could put sensitive data in the hands of the Chinese government or be used as a tool to spread misinformation. Chinese law allows the government to order companies to help it gather intelligence.
TikTok, which is negotiating with the federal government over its future in the U.S., has denied those allegations. But that hasn’t made the issue go away.
In a first-of-its kind report on Chinese disinformation released last month, the U.S. State Department alleged that ByteDance seeks to block potential critics of Beijing, including those outside of China, from using its platforms.
The report said the U.S. government had information as of late 2020 that ByteDance “maintained a regularly updated internal list” identifying people who were blocked or restricted from its platforms — including TikTok — “for reasons such as advocating for Uyghur independence.”
More than half of U.S. states and the federal government have banned TikTok on official devices. The company has called the bans “political theatre” and says further restrictions are unnecessary due to the efforts it is taking to protect U.S. data by storing it on Oracle servers.
The bill was brought to the Montana Legislature after a Chinese spy balloon flew over the state.
It would prohibit downloads of TikTok in the state and fine any “entity” — an app store or TikTok — $10,000 per day for each time someone “is offered the ability” to access or download the app. There would not be penalties for users.
The American Civil Liberties Union, its Montana chapter and the Electronic Frontier Foundation, a digital privacy rights advocacy group, have submitted an amicus brief in support of the challenge. Meanwhile, 18 attorneys general from mostly Republican-led states are backing Montana and asking the judge to let the law be implemented. Even if that happens, cybersecurity experts have said it could be challenging to enforce.
In asking for the preliminary injunction, TikTok argued that the app has been in use since 2017 and letting Montanans continue to use it will not harm the state.
Montana did not identify any evidence of actual harm to any resident as a result of using TikTok and even delayed the ban’s effective date until Jan. 1, 2024, the company said.
SANTA FE, N.M. — A group has been impersonating government officials, harassing New York residents at their homes and falsely accusing them of breaking the law, state officials have warned.
But what sounds like a scam aimed at people’s pocketbooks is actually part of a campaign with a much different target: voters.
State prosecutors have sent a cease-and-desist order to a group called New York Citizens Audit demanding that it halt any “unlawful voter deception” and “intimidation efforts.”
It’s the type of tactic that concerns many state election officials across the country as conservative groups, some with ties to allies of former President Donald Trump and motivated by false claims of widespread fraud in 2020, push to access and sometimes publish state voter registration rolls, which list names, home addresses and in some cases party registration. One goal is to create free online databases for groups and individuals who want to take it upon themselves to try to find potential fraud.
The lists could find their way into the hands of malicious actors and individual efforts to inspect the rolls could disenfranchise voters through intimidation or canceled registrations, state election officials and privacy advocates warned. They worry that local election offices may be flooded with challenges to voter registration listings as those agencies prepare for the 2024 elections.
John Davisson, director of litigation at the Electronic Privacy Information Center, said the concern reflects the competing interests over voter data: the need to protect voter rolls from cybersecurity attacks, weighed against the desire to make them accessible so elections are transparent.
“It’s not surprising that this is a battleground right now,” he said.
Baseless claims of widespread voter fraud are part of what’s driving the efforts to obtain the rolls, leading to lawsuits over whether to hand over the data in several states, including Maine, New Mexico and Pennsylvania.
In New York, a warning from the state elections board preceded the cease-and-desist letter from the state attorney general’s office. Voters in 13 counties had been approached at their homes in recent weeks in an apparently coordinated effort by people impersonating election officials, in some cases wielding phony IDs, the board said. Residents were confronted about their voter registration status and accused of misconduct.
In one instance, people wearing identification badges accused a woman at her Glens Falls home of committing a crime by apparently being registered to vote in two counties, said Warren County spokesman Don Lehman. But the woman had already filed to change her registration and canvassers were apparently using out-of-date information, he said.
“She was quite shaken by the whole thing,” Lehman said. “She did nothing nefarious at all. Either these people don’t understand that or understand how the process works, but it seems like they were quite accusatory.”
State prosecutors found no evidence that any of those contacted had committed voter fraud or any other type of crime, they said in their warning letter.
NY Citizens Audit emailed a statement that dismissed as “absurd” concerns that its canvassers might have impersonated an official or harassed anyone. Instead, the group urged election officials to investigate “each of these millions of suspected illegal registrations.”
“We train our people to do legal canvassing, and if ever verified, voter intimidation would be completely unacceptable and against our policy,” NY Citizens Audit Director Kim Hermance said in the statement.
One of the most ambitious groups, the Voter Reference Foundation, was founded after the 2020 presidential election by Republican Doug Truax of Illinois with a goal of posting online lists from every state. The VoteRef.com database so far includes information from 32 states and the District of Columbia and is run by Gina Swoboda, a former organizer of Trump’s 2020 campaign in Arizona.
A federal trial is scheduled to start later this month over the group’s fight to access and use New Mexico’s voter registration list.
The group also sued Pennsylvania, which refused to hand over the information; publishing it would put every registered voter at greater risk of identity theft or misuse of their information, the state’s Office of Open Records said.
Truax declined to speak to The Associated Press, but has said in a statement on the Pennsylvania case that, “We have a crisis of confidence in America when it comes to election results, and the answer is more transparency, not less.”
The head of elections in New Mexico, Democratic Secretary of State Maggie Toulouse Oliver, fears many voters might withdraw from registration lists as personal data is posted online. Her office cites email inquiries about how to cancel voter registrations during a short-lived canvassing effort by election activists last year in southern New Mexico.
“Voters can and should expect a reasonable amount of privacy,” Toulouse Oliver said. “What Voter Reference is doing is saying, ‘If you have doubts about the election and who is registered to vote and who is voting, here is every voter’s information. Go out and figure it out for yourself whether these people are real.’”
The Voter Reference Foundation argues that federal law is on its side, citing public disclosure provisions of the National Voter Registration Act that require states to make a “reasonable” effort to keep the registration lists free of people who died or moved away. The foundation also invokes free speech and due-process rights.
Nearly every state prohibits the use or transfer of the lists for commercial purposes, while several confine access to political candidates and parties for campaign purposes, and to some government activities.
In March, New Mexico banned the transfer or publication of voter data online, with felony penalties and possible fines of $100 per voter.
Virginia data was removed from VoteRef.com after Republicans and Democrats united last year to ban online publication of registrations.
In Maine, an ongoing legal dispute over privacy and the use of voter lists is pitting state election regulators against a conservative-backed group that has been highlighting and litigating what it says are shortcomings in election systems for a decade. It has assembled voter rolls from multiple states.
The state historically provided voter registration lists to candidates and political parties before being sued in 2019 for failing to provide its voter list to the Public Interest Legal Foundation. In 2021, Maine’s governor signed a bill allowing the voter registration lists to be turned over to additional organizations, but with a stipulation that no voter names could be published in a way that compromises privacy.
The restrictions interfere with comparing lists across states, said the group’s president, J. Christian Adams, whose case against the state is scheduled for legal arguments Thursday at a Boston federal appeals court. Adams, a Republican, served on a commission Trump convened after his 2016 win to investigate voter fraud. The commission was disbanded without any finding of widespread fraud.
Maine Secretary of State Shenna Bellows, a Democrat, said residents sharing details about voters, including addresses, is a bad idea.
“In an era of conspiracies and lies about our elections, integrity of voter information is hugely important,” she said. “We want to make sure that no voters are targeted or harassed or threatened because of their decision to register and cast a ballot.”
___
Associated Press writers David Sharp in Portland, Maine, and Marc Levy in Harrisburg, Pennsylvania, also contributed to this report.
WASHINGTON — Hunter Biden sued Rudy Giuliani and another attorney Tuesday, saying the two wrongly accessed and shared his personal data after obtaining it from the owner of a Delaware computer repair shop.
The lawsuit was the latest in a new strategy by Hunter Biden to strike back against Republican allies of Donald Trump, who have traded and passed around his private data, including purported emails and embarrassing images, in their effort to discredit his father, President Joe Biden.
The suit accuses Giuliani and attorney Robert Costello of spending years “hacking into, tampering with, manipulating, copying, disseminating, and generally obsessing over” the data that was “taken or stolen” from Biden’s devices or storage, leading to the “total annihilation” of Biden’s digital privacy.
The suit also claims Biden’s data was “manipulated, altered and damaged” before it was sent to Giuliani and Costello, and has been further altered since then.
In doing so, they broke laws against computer hacking, according to the lawsuit, which seeks unspecified damages and a court order requiring them to return the data and make no more copies.
Costello used to represent Giuliani but recently filed a lawsuit against the former New York City mayor, saying Giuliani failed to pay more than $1.3 million in legal bills.
A spokesman for Giuliani did not immediately return a message seeking comment Tuesday morning. Costello declined to comment. In February, he told The Associated Press that a letter from Hunter Biden’s lawyers that requested a Justice Department investigation of him and others related to the laptop was a “frivolous legal document” that “reeks of desperation because they know judgment day is coming for the Bidens.”
Tuesday’s lawsuit marks the latest turn in the long-running laptop saga, which began with a New York Post story in October 2020 that detailed some of the emails it says were found on the device related to Hunter Biden’s foreign business dealings. It was swiftly seized on by Trump as a campaign issue during the presidential election that year.
Biden doesn’t explicitly acknowledge that the laptop left at the computer shop was his, but says “at least some” of the data was on his iPhone or backed up to iCloud.
A Justice Department special counsel is also separately pursuing an investigation into Biden’s taxes and has filed firearm possession charges against him, to which he plans to plead not guilty.
House Republicans, meanwhile, have continued to investigate every aspect of Hunter Biden’s business dealings and sought to tie them to his father, the president, as part of an impeachment inquiry. A hearing on Thursday is expected to detail some of their claims anew.
Hunter Biden, meanwhile, after remaining silent as the images were splashed across the country, has changed his approach, and his allies have signaled there’s more to come. Over the past few months, he has also sued a former aide to Trump over his alleged role in publishing emails and embarrassing images, and filed a lawsuit against the IRS, saying his personal data was wrongly shared by two agents who testified as whistleblowers as part of a probe by House Republicans into his business dealings.
Biden has also pushed for an investigation into Giuliani and Costello, along with the Wilmington computer repair shop owner who has said Hunter Biden dropped a laptop off at his store in April 2019 and never returned to pick it up.
Giuliani provided the information to a reporter at the New York Post, which first wrote about the laptop, Biden’s attorney said in a letter pushing for a federal investigation.
___
Associated Press writer Eric Tucker contributed to this report.
WASHINGTON — Hunter Biden has gone on the offensive against his Republican critics, arguing in a new lawsuit that although he is the son of the president of the United States, he shouldn’t be treated differently than any other American.
The lawsuit against the IRS is only the latest in a series of counterpunches by the president’s son. But while Hunter Biden’s lawyers might think that an aggressive approach is the best legal strategy for Biden the son, that might not be what’s best for Biden the father as he seeks reelection and tries to keep the public focused on his policy achievements.
The president has had little to say about his son’s legal woes — which now include a felony indictment — beyond that Hunter did nothing wrong and that he loves his son. The White House strategy has been to keep the elder Biden head-down and focused on governing, reasoning that that’s what voters will prioritize, while working to keep Hunter’s troubles at arm’s length.
There’s one hopeful school of thought among the president’s allies that even if all the headlines about Hunter Biden aren’t a plus for the president’s reelection campaign, the legal process could ultimately clear the air in a positive way.
“Obviously, the White House and Hunter’s teams are looking at it from different perspectives,” said Democratic political strategist David Brock. “It’s important for the facts to reach the public, and when that happens, I think ultimately that’s beneficial to the president.”
But privately, some Democrats are concerned that Hunter Biden’s legal problems could harm Biden heading into 2024 and pose difficulties for Democrats in tight House races, according to people familiar with the matter who were not authorized to speak publicly and spoke to The Associated Press on condition of anonymity.
The lawsuit that Hunter Biden filed Monday against the IRS maintains that two agents who testified as whistleblowers violated his privacy by publicly disclosing his tax data as part of a probe by House Republicans into his business dealings.
Hunter Biden’s team last week sued a former Donald Trump aide over his alleged role in publishing emails and embarrassing images of the younger Biden. And his team also has asked state and federal agencies to open a criminal probe into Trump allies for accessing and spreading his personal data.
Hunter Biden agreed in June to plead guilty to two tax misdemeanors and avert prosecution on a gun charge by enrolling in a diversion program. But the agreement unraveled following a July 26 court hearing that was meant to end the case, and the younger Biden was then indicted on a felony weapons charge.
His legal woes have increasingly complicated matters for the president, who also faces an impeachment inquiry by House Republicans seeking to link him to the business dealings of his son. While Hunter Biden did trade on his family name in business dealings, Republicans have so far unearthed no significant evidence of wrongdoing by the elder Biden, who spoke often to his son as vice president and did stop by a business dinner with his son’s associates.
Biden hasn’t had much to say about the impeachment drive. And he also has kept his distance from the Justice Department prosecutions of both his son and Donald Trump.
Now, Hunter Biden could be heading to trial in the midst of his father’s reelection effort. That suits Republicans, who are eager to distract from the multiple criminal indictments of Trump, the early GOP primary front-runner, whose trials could be unfolding at the same time.
Hunter Biden’s allies have argued the plea deal fell through in part because Justice Department officials bowed to pressure from Republicans who claimed he was getting a “sweetheart deal” to end a five-year investigation into his tax and business dealings.
“This is just the beginning and far from the end of Hunter and his team going on offense and fighting back,” said Michael LaRosa, a former special assistant to the president.
Their previous strategy of “being unresponsive has only led to Republicans filling a void with disinformation, smears, lies, and conspiracy theories that have severely damaged the president’s image and reputation, as you can see in poll after poll. Somebody has to be out there correcting the record and fighting back,” LaRosa said.
Polling reflects the impact on the president of the drumbeat of negative headlines.
Roughly 1 in 3 Americans are highly concerned about whether Joe Biden may have committed wrongdoing related to his son’s business dealings, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research. About half of Americans say they have little or no confidence that the Justice Department is handling its investigation into Hunter Biden in a fair and nonpartisan way.
The political divide on these points is stark: 66% of Republicans — and just 7% of Democrats — are very or extremely concerned about whether Joe Biden committed wrongdoing when it comes to his son’s business dealings.
The headlines are likely to continue given the impeachment inquiry that’s just ramping up and the special counsel’s decision to file federal gun charges against Hunter Biden.
He is accused of lying on the forms he completed to buy a gun when he stated that he wasn’t a drug user at the time of the purchase. Hunter Biden, according to his memoir, tumbled into drug addiction after the death of his older brother, Beau, in 2015.
Earlier this year Hunter Biden hired high-profile attorney Abbe Lowell, a legal heavyweight known for also representing Jared Kushner and Ivanka Trump.
Shortly after, his team requested the criminal probe into Trump allies. In March, Hunter Biden sued a Delaware-based computer repairman who was said to have a laptop that belonged to the president’s son and who disseminated data from it. Five days ago, he sued the Trump aide over the publishing of the data. And on Monday, he sued the IRS.
“Mr. Biden has no fewer or lesser rights than any other American citizen, and no government agency or government agent has free rein to violate his rights simply because of who he is,” the lawsuit against the IRS states.
Prosecutions for lying on a federal gun application are uncommon, particularly when there’s no allegation that the gun was bought to carry out a crime, experts said. There are also questions about the constitutionality of the federal ban on gun possession by people who use drugs in light of a Supreme Court ruling that expanded gun rights.
Hunter Biden’s lawyers have signaled they will try to argue that an agreement sparing him prosecution on a felony gun charge should remain in place even though the plea deal on misdemeanor tax offenses largely unraveled.
If the case goes to trial, it could be a tough sell to a jury.
“Addiction is something that touches a lot of Americans and the notion that this person who was in trouble with drug use and for 11 days owned a firearm that was never used for anything whatsoever, that’s not going to sit well at a federal felony criminal trial with a lot of jurors,” said Jennifer Rodgers, a former federal prosecutor.
“And it is not even touching on the issue of whether people think that he is being prosecuted because he’s Hunter Biden,” she said.
___
Durkin reported from Boston. Associated Press writer Michael Balsamo contributed to this report.
LONDON — The gloves are off in the U.K. government’s deepening spat with tech giant Meta.
On Wednesday, Britain’s Home Secretary Suella Braverman unveiled a fresh campaign aimed at making the Mark Zuckerberg-led tech giant rethink its plan to roll out end-to-end encryption on Facebook and Instagram — a move she says will hamper the police’s ability to catch pedophiles.
At a background briefing for reporters on Tuesday, Home Office officials used graphic language to describe the types of child sexual abuse material that they say risks going undetected if Meta goes ahead with its plans. A video put together as part of the campaign features a victim of child sex abuse appealing directly to Meta chief Mark Zuckerberg to rethink plans to roll out encryption.
The National Crime Agency has estimated that making messages on Facebook Messenger and Instagram end-to-end encrypted will wipe out more than 85 percent of the platforms’ reports of online child sexual abuse material.
Meta, which aims to finalize the encryption rollout by the end of the year, has said it plans to continue policing its platforms for grooming and the sharing of child abuse content. It will do this by, for example, watching for suspicious behavior from accounts and providing a range of controls to help kids avoid harm.
But Braverman said she’s not yet been convinced that these measures will make up for the shortfall in reports that the encryption changes are expected to bring about, prompting her to write to the tech giant in July asking it to stop its encryption rollout if it can’t give stronger assurances.
“Meta has failed to provide assurances that they will keep their platforms safe from sickening abusers,” Braverman said in a press release. “They must develop appropriate safeguards to sit alongside their plans for end-to-end encryption.”
“We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security,” said a Meta spokesperson.
The company on Wednesday also published an updated report setting out these measures, such as restricting people over 19 from messaging teens who don’t follow them and using technology to identify and take action against malicious behaviour.
The Online Safety Bill, which passed its final parliamentary hurdle Tuesday, would empower Britain’s comms regulator Ofcom to force tech companies to monitor messenger apps for illegal child abuse content. That’s proven controversial, with dozens of cryptography experts saying that the powers would effectively undermine end-to-end encryption — tech that enables only the sender and receiver to view messages.
Tech execs like Signal’s Meredith Whittaker and WhatsApp’s Will Cathcart have suggested they’d rather have their encrypted services blocked in the U.K. than undermine privacy for millions of users on their apps.
But Ofcom officials have previously said there’d be a high bar for them to mandate monitoring on encrypted apps, while any order for Meta to scan its messenger apps for content would prove highly contentious for the regulator.
That’s what’s prompted the U.K. government to lobby for Meta to rethink its plans in the first place.
“We urge companies looking to introduce end-to-end encryption to their services to think carefully about the impact on younger, vulnerable users,” said Susie Hargreaves, chief executive of child protection group the Internet Watch Foundation in a statement.
INDIANAPOLIS — Indiana’s attorney general has sued the state’s largest hospital system, claiming it violated patient privacy laws when a doctor publicly shared the story of an Ohio girl who traveled to Indiana for an abortion.
The lawsuit, filed Friday in Indianapolis federal court, marked Attorney General Todd Rokita’s latest attempt to seek disciplinary legal action against Dr. Caitlin Bernard. The doctor’s account of a 10-year-old rape victim traveling to Indiana to receive abortion drugs became a flashpoint in the abortion debate days after the U.S. Supreme Court overturned Roe v. Wade last summer.
Rokita, a Republican, is stridently anti-abortion and Indiana was the first state to approve abortion restrictions after the court’s decision. The near-total abortion ban recently took effect after legal battles.
“Neither the 10-year-old nor her mother gave the doctor authorization to speak to the media about their case,” the lawsuit stated. “Rather than protecting the patient, the hospital chose to protect the doctor, and itself.”
The lawsuit named Indiana University Health and IU Healthcare Associates. It alleged the hospital system violated HIPAA, the federal Health Insurance Portability and Accountability Act, and a state law for not protecting the patient’s information.
Indiana’s medical licensing board reprimanded Bernard in May, saying she didn’t abide by privacy laws by talking publicly about the girl’s treatment. It was far short of the medical license suspension that Rokita’s office sought.
Still, the board’s decision received widespread criticism from medical groups and others who called it a move to intimidate doctors.
Hospital system officials have argued that Bernard didn’t violate privacy laws.
“We continue to be disappointed the Indiana Attorney General’s office persists in putting the state’s limited resources toward this matter,” IU Health said in a statement. “We will respond directly to the AG’s office on the filing.”
In July, a 28-year-old man was sentenced to life in prison for the child’s rape.
SAN FRANCISCO — You may not know it, but thousands of often shadowy companies routinely traffic in personal data you probably never agreed to share — everything from your real-time location information to private financial details. Even if you could identify these data brokers, there isn’t much you can do about their activities, including in California, which has some of the strongest digital privacy laws in the U.S.
That’s on the verge of changing. Both houses of the California state Legislature have passed the Delete Act, which would establish a “one stop shop” where individuals could order hundreds of data brokers registered in the state to delete their personal data — and to cease acquiring and selling it in the future — with a single request.
The Delete Act isn’t law yet. Democratic Gov. Gavin Newsom still has to decide whether to sign the measure, whose impact could potentially extend well beyond state lines given California’s history of setting similar trends.
Here’s what you need to know.
While California law already gives individuals the right to request data deletion, doing so currently requires making separate requests to hundreds of data brokers registered in the state, many with their own unique requirements for drafting and handling such requests. Even then, nothing stops these companies from simply reacquiring the data after they delete it.
The Delete Act would require the state’s new privacy office, the California Privacy Protection Agency, to set up a website where consumers can verify their identity and then make a single request to delete their personal data held by data brokers and to opt out of future tracking. Proponents call it a “do not track” signal similar to the “do not call” list for telemarketers maintained by the Federal Trade Commission.
California already regulates data brokers, but the Delete Act would strengthen those provisions by requiring the companies to disclose more information about the data they collect on consumers and beefing up the state’s enforcement mechanisms.
The Electronic Privacy Information Center, a Washington, D.C., nonprofit focused on bolstering the right to privacy, defines data brokers as companies that collect and categorize personal information, usually to build profiles on millions of Americans that the companies can then rent, sell or use to provide services.
The data they collect, per EPIC, can include: “names, addresses, telephone numbers, email addresses, gender, age, marital status, children, education, profession, income, political preferences, and cars and real estate owned.”
That is in addition to “information on an individual’s purchases, where they shop, and how they pay for their purchases,” plus “health information, the sites we visit online, and the advertisements we click on. And thanks to the proliferation of smartphones and wearables, data brokers collect and sell real-time location data.”
Privacy advocates have warned for years that location and seemingly non-specific personal data — often collected by advertisers and amassed and sold by brokers — can be used to identify individuals. They also charge that the data often isn’t well secured and that the brokers aren’t covered by laws that require the clear consent of the person being tracked. They have argued for both legal and technical protections so consumers can push back.
Data brokers say they get a bad rap for serving a vital need.
Dan Smith, president of the Consumer Data Industry Association, which describes itself as “the voice of the consumer reporting industry,” called the Delete Act “severely flawed” and warned in a Wednesday release that the change could lead to unintended consequences by undermining consumer fraud protections, hurting the competitiveness of small businesses and entrenching big platforms such as Facebook and Google that collect vast amounts of consumer data but don’t sell it.
Smith also argued that the heart of the bill — the one-stop data deletion program — could potentially allow malicious outsiders to impersonate consumers and delete their data without permission, though he did not explain what a third party might have to gain by deleting a consumer’s data. The organization also argues that the cost of the legislation will be much greater than California regulators currently suggest. The Delete Act specifically exempts credit reporting agencies such as Experian, Equifax and TransUnion, whose reports are often required for big-ticket consumer purchases such as homes or cars. The CDIA did not immediately reply to a request for clarification.
In other respects, though, the information collected by these companies can be startlingly easy to abuse. The general lack of U.S. restrictions on what brokers can do with the vast amounts of data they collect means there aren’t many legal protections to prevent outsiders from spying on politicians, celebrities and just about anyone who is a target of idle curiosity, or malice.
In mid-2021, for instance, the U.S. Conference of Catholic Bishops announced the resignation of its top administrative official, Monsignor Jeffrey Burrill, ahead of a report by the Catholic news outlet The Pillar probing his private romantic life. The Pillar said it obtained “commercially available” location data from an unnamed vendor that was “correlated” to Burrill’s phone to determine he had visited gay bars and private residences while using Grindr, a dating app popular with gay people.
The Pillar alleged “serial sexual misconduct” by Burrill, as homosexual activity is considered sinful under Catholic doctrine and priests are expected to remain celibate. Following an extended leave, Burrill resumed his ministry in the small town of West Salem, Wisconsin, according to the Catholic News Service.
LONDON — Back in the spring, Britain was sounding pretty relaxed about the rise of AI. Then something changed.
The country’s artificial intelligence white paper — unveiled in March — dealt with the “existential risks” of the fledgling tech in just four words: high impact, low probability.
Less than six months later, Prime Minister Rishi Sunak seems newly troubled by runaway AI. He has announced an international AI Safety Summit, referred to “existential risk” in speeches, and set up an AI safety taskforce with big global aspirations.
Helping to drive this shift in focus is a chorus of AI Cassandras associated with a controversial ideology popular in Silicon Valley.
Known as “Effective Altruism,” the movement was conceived in the ancient colleges of Oxford University, bankrolled by the Silicon Valley elite, and is increasingly influential on the U.K.’s positioning on AI.
Not everyone’s convinced it’s the right approach, however, and there’s mounting concern Britain runs the risk of regulatory capture.
The race to ‘God-like AI’
Effective altruists claim that super-intelligent AI could one day destroy humanity, and advocate policy that’s focused on the distant future rather than the here-and-now. Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.
“The view is that the outcome of artificial super-intelligence will be binary,” says Émile P. Torres, a philosopher and former EA turned critic of the movement. “That if it’s not utopia, it’s annihilation.”
In the U.K., key government advisers sympathetic to the movement’s concerns, combined with Sunak’s close contact with leaders of the AI labs – which have longstanding ties to the movement – have helped push “existential risk” right up the U.K.’s policy agenda.
When ChatGPT-mania reached its zenith in April, tech investor Ian Hogarth penned a viral Financial Times article warning that the race to “God-like AI” “could usher in the obsolescence or destruction of the human race” – urging policymakers and AI developers to pump the brakes.
It echoed the influential “AI pause” letter calling for a moratorium on “giant AI experiments,” and, in combination with a later letter saying AI posed an extinction risk, helped fuel a frenzied media cycle that prompted Sunak to issue a statement claiming he was “looking very carefully” at this class of risks.
“These kinds of arguments around existential risk or the idea that AI would develop super-intelligence, that was very much on the fringes of credible discussion,” says Mhairi Aitken, an AI ethics researcher at the Alan Turing Institute. “That’s really dramatically shifted in the last six months.”
The EA community credited Hogarth’s FT article with telegraphing these ideas to a mainstream audience, and hailed his appointment as chair of the U.K.’s Foundation Model Taskforce as a significant moment.
Under Hogarth, who has previously invested in the AI labs Anthropic, Faculty and Helsing, and in the AI safety firm Conjecture, the taskforce announced a new set of partners last week – a number of which have ties to EA.
Three of the four partner organizations on the lineup are bankrolled by EA donors. The Centre for AI Safety is the organization behind the “AI extinction risk” letter (the “AI pause” letter was penned by another EA-linked organization, the Future of Life Institute). Its primary funding – to the tune of $5.2 million – comes from the major EA donor organization Open Philanthropy.
Another partner is Arc Evals, which “works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilization.”
It’s a project of the Alignment Research Centre, an organization that has received $1.5 million from Open Philanthropy, $1.25 million from high-profile EA Sam Bankman-Fried’s FTX Foundation (which it promised to return after the implosion of his crypto empire), and $3.25 million from the Survival and Flourishing Fund, set up by Skype co-founder and prominent EA Jaan Tallinn. Arc Evals is advised by Open Philanthropy CEO Holden Karnofsky.
Finally, the Collective Intelligence Project, a body working on new governance models for transformative technology, began life with an FTX regrant, and a co-founder appealed to the EA community for funding and expertise this year.
Joining the taskforce as one of two researchers is Cambridge professor David Krueger, who has received a $1 million grant from Open Philanthropy to further his work to “reduce the risk of human extinction resulting from out-of-control AI systems.” He describes himself as “EA-adjacent.” One of the PhD students Krueger advises, Nitarshan Rajkumar, has been working with the British government’s Department for Science, Innovation and Technology (DSIT) as an AI policy adviser since April.
A range of national security figures, along with renowned computer scientist Yoshua Bengio, are also joining the taskforce as advisers.
Combined with its rebranding as a “Frontier AI Taskforce,” which projects its gaze into the future of AI development, the announcements confirmed the ascendancy of existential risk on the U.K.’s AI agenda.
‘X-risk’
Hogarth told the FT that biosecurity risks – like AI systems designing novel viruses – and AI-powered cyber-attacks weigh heavily on his mind. The taskforce is intended to address these threats, and to help build safe and reliable “frontier” AI models.
“The focus of the Frontier AI Taskforce and the U.K.’s broader AI strategy extends to not only managing risk, but ensuring the technology’s benefits can be harnessed and its opportunities realized across society,” said a government spokesperson, who disputed the influence of EA on its AI policy.
But some researchers worry that the more prosaic threats posed by today’s AI models, like bias, data privacy, and copyright issues, have been downgraded. It’s “a really dangerous distraction from the discussions we need to be having around regulation of AI,” says Aitken. “It takes a lot of the focus away from the very real and ethical risks and harms that AI presents today.”
The EA movement’s links to Silicon Valley also prompt some to question its objectivity. The three most prominent AI labs, OpenAI, DeepMind and Anthropic, all boast EA connections – with traces of the movement variously imprinted on their ethos, ideology and wallets.
Tech mogul Elon Musk claims to be a fan of the closely related “longtermist” ideology, calling it a “close match” to his own. Musk recently hired Dan Hendrycks, director of the Center for AI Safety, as an adviser to his new start-up, xAI, which is also doing its part to prevent the AI apocalypse.
To counter the threat, the EA movement is throwing its financial heft behind the field of AI safety. Open Philanthropy head Holden Karnofsky wrote a February blog post announcing a leave of absence to devote himself to the field, while the EA career advice center 80,000 Hours recommends “AI safety technical research” and “shaping future governance of AI” as the two top careers for EAs.
Trading in an insular jargon of “X-risk” (existential risks) and “p(doom)” (the probability of our impending annihilation), the AI-focused branch of effective altruism is fixated on issues like “alignment” – how closely AI models are attuned to humanity’s value systems – amid doom-laden warnings about “proliferation” – the unchecked propagation of dangerous AI.
Despite its popularity among a cohort of technologists, critics say the movement’s thinking lacks evidence and is alarmist. A vocal critic, former Googler Timnit Gebru, has denounced this “dangerous brand of AI safety,” noting that she’d seen the movement gain “alarming levels of influence” in Silicon Valley.
Meanwhile, the “strong intermingling” of EAs and companies building AI “has led…this branch of the community to be very subservient to the AI companies,” says Andrea Miotti, head of strategy and governance at AI safety firm Conjecture. He calls this a “real regulatory capture story.”
The pitch to industry
Citing the Center for AI Safety’s extinction risk letter, Hogarth called on AI specialists and safety researchers to join the taskforce’s efforts in June, noting that at “a pivotal moment, Rishi Sunak has stepped up and is playing a global leadership role.”
On stage at the Tony Blair Institute conference in July, Hogarth – perspiring in the midsummer heat but speaking with composed conviction – struck an optimistic note. “We want to build stuff that allows for the U.K. to really have the state capacity to, like, engineer the future here,” he said.
Although the taskforce was initially intended to build up sovereign AI capability, Hogarth’s arrival saw a new emphasis on AI safety. The U.K. government’s £100 million commitment is “the largest amount ever committed to this field by a nation state,” he tweeted.
The taskforce recruitment ad was shared on the Effective Altruism forum, and Hogarth’s appointment was announced in Effective Altruism UK’s July newsletter.
Hogarth is not the only one in government who appears to be sympathetic to the EA movement’s arguments. Matt Clifford, chair of the government R&D body ARIA, an adviser to the AI taskforce and AI sherpa for the safety summit, has urged EAs to jump aboard the government’s latest AI safety push.
“I would encourage any of you who care about AI safety to explore opportunities to join or be seconded into government, because there is just a huge gap of knowledge and context on both sides,” he said at the Effective Altruism Global conference in London in June.
“Most people engaged in policy are not familiar … with arguments that would be familiar to most people in this room about risk and safety,” he added, but cautioned that hyping apocalyptic risks was not typically an effective strategy when it came to dealing with policymakers.
Clifford said that ARIA would soon announce directors who will be in charge of grant-giving across different areas. “When you see them, you will see there is actually a pretty good overlap with some prominent EA cause areas,” he told the crowd.
A British government spokesperson said Clifford is “not part of the core Effective Altruism movement.”
Civil service ties
Influential civil servants also have EA ties. Supporting the work of the AI taskforce is Chiara Gerosa, who in addition to her government work is facilitating an introductory AI safety course “for a cohort of policy professionals” for BlueDot Impact, an organization funded by Effective Ventures, a philanthropic fund that supports EA causes.
The course “will get you up to speed on extreme risks from AI and governance approaches to mitigating these risks,” according to the website, which states alumni have gone on to work for the likes of OpenAI, GovAI, Anthropic, and DeepMind.
People close to the EA movement say that its disciples see the U.K.’s AI safety push as encouragement to get involved and help nudge policy along an EA trajectory.
EAs are “scrambling to be part of Rishi Sunak’s announced Foundation Model Taskforce and safety conference,” according to an AI safety researcher who asked not to be named as they didn’t want to risk jeopardizing EA connections.
“One said that while Rishi is not the ‘optimal’ candidate, at least he knows X-risk,” they said. “And that ‘we’ need political buy-in and policy.”
“The foundation model taskforce is really centring the voices of the private sector, of industry … and that in many cases overlaps with membership of the Effective Altruism movement,” says Aitken. “That to me, is very worrying … it should really be centring the voices of impacted communities, it should be centring the voices of civil society.”
Jack Stilgoe, policy co-lead of Responsible AI, a body funded by the U.K.’s R&D funding agency, is concerned about “the diversity of the taskforce.” “If the agenda of the taskforce somehow gets captured by a narrow range of interests, then that would be really, really bad,” he says, adding that the concept of alignment “offers a false solution to an imaginary problem.”
A spokesperson for Open Philanthropy, Michael Levine, disputed that the EA movement carried any water for AI firms. “Since before the current crop of AI labs existed, people inspired by effective altruism were calling out the threats of AI and the need for research and policies to reduce these risks; many of our grantees are now supporting strong regulation of AI over objections from industry players.”
From Oxford to Whitehall, via Silicon Valley
Birthed at Oxford University by rationalist utilitarian philosopher William MacAskill, EA began life as a technocratic preoccupation with how charitable donations could be optimized to wring out maximal benefit for causes like global poverty and animal welfare.
Over time, it fused with transhumanist and techno-utopian ideals popular in Silicon Valley, and a mutated version called “longtermism” that is fixated on ultra-long-term timeframes now dominates. MacAskill’s most recent book What We Owe the Future conceptualizes a million-year timeframe for humanity and advocates the colonization of space.
Oxford University remains an ideological hub for the movement, and has spawned a thriving network of think tanks and research institutes that lobby the government on long-term or existential risks, including the Centre for the Governance of AI (GovAI) and the Future of Humanity Institute.
Other EA-linked organizations include Cambridge University’s Centre for the Study of Existential Risk, which was co-founded by Tallinn and receives funding from his Survival and Flourishing Fund – which is also the primary funder of the Centre for Long Term Resilience, set up by former civil servants in 2020.
The think tanks tend to overlap with leading AI labs, both in terms of membership and policy positions. For example, the founder and former director of GovAI, Allan Dafoe, who remains chair of the advisory board, is also head of long-term AI strategy and governance at DeepMind.
“We are conscious that dual roles of this form warrant careful attention to conflicts of interest,” reads the GovAI website.
GovAI, OpenAI and Anthropic declined to offer comment for this piece. A Google DeepMind spokesperson said: “We are focused on advancing safe and responsible AI.”
The movement has been accruing political capital in the U.K. for some time, says Luke Kemp, a research affiliate at the Centre for the Study of Existential Risk who doesn’t identify as EA. “There’s definitely been a push to place people directly out of existential risk bodies into policymaking positions,” he says.
CLTR’s head of AI policy, Jess Whittlestone, is in the process of being seconded to DSIT one day a week to assist on AI policy in the run-up to the AI Safety Summit, according to a CLTR August update seen by POLITICO. In the interim, she is informally advising several policy teams across DSIT.
Meanwhile, Markus Anderljung, a former specialist adviser to the Cabinet Office, is now head of policy at GovAI.
Kemp says he has expressed reservations about existential risk organizations attempting to get staff members seconded to government. “We can’t be trusted as objective and fair regulators or scholars, if we have such deep connections to the bodies we’re trying to regulate,” he says.
“I share the concern about AI companies dominating regulatory discussions, and have been advocating for greater independent expert involvement in the summit to reduce risks of regulatory capture,” said Whittlestone. “It is crucial for U.K. AI policy to be informed by diverse perspectives.”
Instead of the risks of existing foundation models like GPT-4, EA-linked groups and AI companies tend to talk up the “emergent” risks of frontier models — a forward-looking stance that nudges the regulatory horizon into the future.
This framing “is a way of suggesting that that’s why you need to have Big Tech in the room – because they are the ones developing these frontier models,” suggests Aitken.
At the frontier
Earlier in July, CLTR and GovAI collaborated on a paper about how to regulate so-called frontier models, alongside researchers from DeepMind, OpenAI and Microsoft, as well as academics. The paper explored the controversial idea of licensing the most powerful AI models, a proposal that’s been criticized for its potential to cement the dominance of leading AI firms.
CLTR presented the paper to No. 10 with the prime minister’s special advisers on AI and the director and deputy director of DSIT in attendance, according to the CLTR memo.
Such ideas appear to be resonating. In addition to announcing the “Frontier AI Taskforce”, the government said in September that the AI Summit would focus entirely on the regulation of “frontier AI.”
The British government disputes the idea that its AI policy is narrowly focused. “We have engaged extensively with stakeholders in creating our AI regulation white paper, and have received a broad and diverse range of views as part of the recently closed consultation process which we will respond to in due course,” said a spokesperson.
Spokespeople for CLTR and CSER said that both groups focus on risks across the spectrum, from near-term to long-term, while a CLTR spokesperson stressed that it’s an independent and non-partisan think tank.
Some say that it’s the external circumstances that have changed, rather than the effectiveness of the EA lobby. CSER professor Haydn Belfield, who identifies as an EA, says that existential risk think tanks have been petitioning the government for years – on issues like pandemic preparedness and nuclear risk in addition to AI.
Although the government appears more receptive to their overtures now, “I’m not sure we’ve gotten any better at it,” he says. “I just think the world’s gotten worse.”
Booming social media application TikTok needs to pay up in Europe for violating children’s privacy.
The popular Chinese-owned app failed to protect children’s personal information by making their accounts publicly accessible by default and insufficiently tackled risks that under-13 users could access its platform, the Irish Data Protection Commission (DPC) said in a decision published Friday.
The regulator slapped TikTok with a €345 million fine for breaching the EU’s landmark privacy law, the General Data Protection Regulation (GDPR).
The penalty comes amid high tensions between the European Union and China, following the EU’s announcement that it plans to probe Chinese state subsidies of electric cars. European Commission Vice President Věra Jourová is also set to visit China next Monday-Tuesday and meet Vice Premier Zhang Guoqing to discuss the two sides’ technology policies, amid growing concerns over Beijing’s data gathering and cyber espionage practices.
“Alone the fine of [€345 million] is a headline sanction to impose but reflects the extent to which the DPC identified child users were exposed to risk in particular arising from TikTok’s decision at the time to default child user accounts to public settings on registration,” said Helen Dixon, the Irish data protection commissioner, in a written statement.
The Irish privacy regulator said that, in the period from July to December 2020, TikTok had unlawfully made accounts of users aged 13 to 17 public by default, effectively making it possible for anyone to watch and comment on videos they posted. The company also did not appropriately assess the risks that users under the age of 13 could gain access to its platform. It also found that TikTok is still pushing teenagers joining the platform to make their accounts and videos public through manipulative pop-ups. The regulator ordered the firm to change these misleading designs, known as dark patterns, within the next three months.
Minors’ accounts could be paired up with unverified adult accounts during the second half of 2020. The authority said the video platform had also previously failed to explain to teenagers the consequences of making their content and accounts public.
“We respectfully disagree with the decision, particularly the level of the fine imposed,” said Morgan Evans, a TikTok spokesperson. “The [Data Protection Commission]’s criticisms are focused on features and settings that were in place three years ago, and that we made changes to well before the investigation even began, such as setting all under-16 accounts to private by default.”
TikTok added it will comply with the order to change misleading designs by extending such default-privacy settings to accounts of new users aged 16 and 17 later in September. It will also roll out in the next three months changes to the pop-up young users get when they first post a video.
The decision marks the largest-ever privacy fine for TikTok, which is now actively used by 134 million Europeans monthly, and the fifth-largest fine imposed on any tech company under the GDPR.
The platform popular among teenagers has previously faced criticism for insufficiently mitigating harms it poses to its young users, including deadly viral challenges and its addictive algorithm. TikTok — like 18 other online platforms — also now has to limit risks like cyberbullying or face steep fines under the Digital Services Act (DSA).
The costly fine adds to TikTok’s woes in Europe, after it saw a wave of new restrictions on its use earlier this year due to concerns about its connection to China.
The social media app, whose parent company ByteDance is based in Beijing, has struggled to quash concerns over its data security. The company said this month it had started moving its European data to a center within the bloc. Yet, it is still under investigation by the Irish Data Protection Commission over the potentially unlawful transfer of European users’ data to China.
The Irish data authority in 2021 started probing whether TikTok was respecting children’s privacy requirements. TikTok set up its legal EU headquarters in Dublin in late 2020, meaning the Irish privacy watchdog has been the company’s supervisor for the whole bloc under the GDPR.
Other national watchdogs weighed in on the investigation over the summer via the European Data Protection Board (EDPB), after two German privacy agencies and Italy’s regulator disagreed with Ireland’s initial findings. The group instructed Ireland to sanction TikTok for nudging its users toward public accounts in its misleading pop-ups.
The board of European regulators also had “serious doubts” that TikTok’s measures to keep under-13 users off its platform were effective in the second half of 2020. The EDPB said the mechanisms “could be easily circumvented” and that TikTok was not checking ages “in a sufficiently systematic manner” for existing users. The group said, however, that it couldn’t find an infringement because of a lack of information available during the cooperation process.
The United Kingdom’s data regulator in April fined TikTok £12.7 million (€14.8 million) for letting children under 13 on its platform and using their data. The company also received a €750,000 fine in 2021 from the Dutch privacy authority for failing to protect Dutch children by not having a privacy policy in their native language.
WASHINGTON — Senate Majority Leader Chuck Schumer has been talking for months about accomplishing a potentially impossible task: Passing bipartisan legislation within the next year that both encourages the rapid development of artificial intelligence and also mitigates its biggest risks. On Wednesday, he is convening a meeting of some of the country’s most prominent technology executives, among others, to ask them how Congress should do it.
The closed-door forum on Capitol Hill will include almost two dozen tech leaders and advocates, and some of the industry’s biggest names: Meta’s Mark Zuckerberg and Elon Musk, the CEO of X and Tesla, as well as former Microsoft CEO Bill Gates. All 100 senators are invited, but the public is not.
Schumer, D-N.Y., who is leading the forum with Republican Sen. Mike Rounds of South Dakota, won’t necessarily take the tech executives’ advice as he works with Republicans and fellow Democrats to try to ensure some oversight of the burgeoning sector. But he’s hoping that they will give senators some realistic direction as he tries to do what Congress hasn’t done for many years — pass meaningful regulation of the tech industry.
“It’s going to be a fascinating group because they have different points of view,” Schumer said in an interview with The Associated Press ahead of the forum. “Hopefully we can weave it into a little bit of some broad consensus.”
Rounds, who spoke to AP with Schumer on Tuesday, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.
“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.
Schumer says regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and ticks off the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.
Congress has a lackluster track record when it comes to regulating technology. Lawmakers have lots of proposals — many of them bipartisan — but have mostly failed to agree on major legislation to regulate the industry as powerful tech companies have resisted.
Many lawmakers point to the failure to pass any legislation surrounding social media — bills have stalled in both chambers that would better protect children, regulate activity around elections and mandate stricter privacy standards, among other measures.
“We don’t want to do what we did with social media, which is let the techies figure it out, and we’ll fix it later,” says Senate Intelligence Committee Chairman Mark Warner, D-Va., on the AI push.
Schumer’s bipartisan working group — made up of Rounds, Democratic Sen. Martin Heinrich of New Mexico and Republican Sen. Todd Young of Indiana — is hoping that the rapid growth of artificial intelligence will create more urgency. Since the release of ChatGPT less than a year ago, businesses across many sectors have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
“You have to have some government involvement for guardrails,” Schumer said. “If there are no guardrails, who knows what could happen.”
Schumer says Wednesday’s forum will focus on big ideas like whether the government should be involved at all, and what questions Congress should be asking. Each participant will have three minutes to speak on a topic of their choosing, and Schumer and Rounds will moderate open discussions among the group in the morning and afternoon.
Some of Schumer’s most influential guests, including Musk and Sam Altman, CEO of ChatGPT-maker OpenAI, have signaled more dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place.
But for many lawmakers and the people they represent, AI’s effects on employment and the flood of AI-generated misinformation are more immediate concerns.
A recent report from the market research group Forrester projected that generative AI technology could replace 2.4 million jobs in the U.S. by 2030, many of them white-collar roles not affected by previous waves of automation. This year alone the number of lost jobs could total 90,000, the report said, though far more jobs will be reshaped than eliminated.
AI experts have also warned of the growing potential of AI-generated online disinformation to influence elections, including the upcoming 2024 presidential race.
On the more positive side, Rounds says he would like to see the empowerment of new medical technologies that could save lives and allow medical professionals to access more data. That topic is “very personal to me,” Rounds says, after his wife died of cancer two years ago.
Many members of Congress agree that legislation will probably be needed in response to the quick escalation of artificial intelligence tools in government, business and daily life. But there is little consensus on what that should be, or what might be needed. There is also some division — some members worry more about overregulation, and others worry more about the potential risks of an unchecked industry.
“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” says Sen. Young, one of the members of Schumer’s working group. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”
Young says that Schumer has reassured him that he will be “hypersensitive to overshooting as we address some of the potential harms of AI.”
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of corporations has called on EU leaders to rethink the rules, arguing that it could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.
In the United States, most major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means.
“We’ve always said that we think that AI should get regulated,” said Dana Rao, general counsel and chief trust officer for software company Adobe. “We’ve talked to Europe about this for the last four years, helping them think through the AI Act they’re about to pass. There are high-risk use cases for AI that we think the government has a role to play in order to make sure they’re safe for the public and the consumer.”
Adobe, which makes Photoshop and the new AI image-generator Firefly, is proposing its own federal legislation: an “anti-impersonation” bill to protect artists as well as AI developers from the misuse of generative AI tools to produce derivative works without a creator’s consent.
Senators say they will figure out a way to regulate the industry, despite the odds.
“Make no mistake. There will be regulation. The only question is how soon, and what,” said Sen. Richard Blumenthal, D-Conn., at a Tuesday hearing on legislation he wrote with Republican Sen. Josh Hawley of Missouri.
Blumenthal’s framework calls for a new “licensing regime” that would require tech companies to seek licenses for high-risk AI systems. It would also create an independent oversight body led by experts and hold companies liable when their products breach privacy or civil rights or endanger the public.
“Risk-based rules, managing the risks, is what we need to do here,” Blumenthal said.
___
O’Brien reported from Providence, Rhode Island. Associated Press writers Ali Swenson in New York and Kelvin Chan in London contributed to this report.
A Georgia couple who said their baby boy was decapitated during his birth have filed a lawsuit against the independent pathologist who performed the baby’s autopsy, accusing him of posting the procedure on Instagram without their permission.
Jessica Ross and her boyfriend, Treveon Isaiah Taylor, had already filed a lawsuit in August against the OB-GYN who conducted their baby’s delivery. They accused the OB-GYN of failing to follow emergency protocols when the baby’s shoulder became stuck and of applying excessive force that severed their son’s head and killed him.
The lawsuit also accuses the hospital and staff of attempting to cover up what happened. (The hospital denied the allegations of wrongdoing in a previous statement to HuffPost, and the OB-GYN has not responded to repeated requests for comment.)
The Clayton County Police Department has confirmed on social media that it has opened an investigation into the hospital’s alleged failure to report on the nature of the newborn’s death.
In the more recent lawsuit, filed on Sept. 1, attorneys for Ross and Taylor accused Dr. Jackson Gates, the pathologist the couple hired to conduct an autopsy after their child’s death, of taking advantage of the tragedy by posting videos of the procedure to his 11,000 followers on Instagram without their permission. The complaint accuses Gates of invasion of privacy and fraud.
“After suffering one of the most heartbreaking losses any family could ever endure, Jessica Ross and Treveon Isaiah Taylor, Jr. had salt poured into their unfathomable emotional wounds when they discovered that video of their baby’s very graphic medical examination had been made public by the very doctor they entrusted to conduct the autopsy,” attorneys representing the couple said in a joint statement.
Gates, who regularly posts videos of his work for educational and public health purposes, didn’t respond to HuffPost’s request for comment. In a statement on Instagram, he said he would never share the identity of any of his patients. The videos have been taken down, and HuffPost could not immediately verify what Gates had shared.
Ross and Taylor had been eagerly looking forward to the birth of their baby boy, whom they were going to name after his father.
On July 9, Ross was admitted to the emergency room of Southern Regional Medical Center in Riverdale, Georgia, after her water broke. The doctor she’d been seeing throughout her pregnancy, Dr. Tracey St. Julian of Premier Woman OB-GYN, delivered the baby on July 10.
But according to the lawsuit, the baby became stuck in Ross’ vaginal canal, forcing her to push for three hours. It wasn’t until after the baby’s head was severed that Ross received a C-section, when the rest of the baby’s body was delivered, the lawsuit states.
Ross and Taylor’s suit alleges that they only found out their baby had been decapitated after the funeral home informed them on July 13, because the doctor, hospital and staff had hidden the cause of death from them.
The couple then decided to have an independent autopsy conducted and paid Gates $2,500.
According to the lawsuit, Gates recorded videos of the baby’s autopsy without their knowledge and then posted them on his public Instagram account on July 14 without permission.
“This video showed in graphic and grisly detail a postmortem examination of the decapitated, severed head of Baby Isaiah,” the lawsuit states.
After that video was removed, the lawsuit alleges, Gates posted two more videos of the baby’s autopsy on July 21 that graphically depicted the baby’s head, body, brain and organs.
The couple felt “shock, anger, humiliation and outrage” after learning about the videos and sent Gates a cease and desist letter on Aug. 10 demanding he take them down, the lawsuit says.
“This is one of the most egregious and outrageous cases of ‘clout chasing’ we have ever encountered,” the couple’s attorneys said. “Dr. Jackson Gates attempted to exploit our clients’ horrific loss to boost his own social media profile, without permission of the family.”
The federal patient privacy law, HIPAA, prohibits medical practitioners from releasing certain identifying information about patients, including names and full-face photographic images. Social media guidelines for pathologists encourage using common sense when they post photos or videos of their work and suggest altering any case details that could inadvertently identify someone.
“None of these alterations are legally required, but, from an ethics perspective, they could help allay anxiety about potential privacy violations while preserving educational value,” a 2016 article in the American Medical Association Journal of Ethics said.
The article notes that pathologists have not routinely sought patient consent when they share educational images in textbooks, lectures and case reports.
“This is a widely accepted long-standing practice in pathology, and, provided that privacy is protected, the authors find no major ethical problems with this practice,” the article said.
In Gates’ posts, he often shows internal organs as he advocates for people to learn the warning signs of disease, get regular cancer screenings or seek a second medical opinion if they have concerns. He’s also spoken out about the health disparities experienced by Black patients, including infant mortality rates.
In one video posted since the lawsuit was filed, Gates said he prides himself on keeping his practice transparent in order to educate and counsel patients.
“I will never divulge the identity or disclose the identity of any live patient or even deceased patients that come to my care,” Gates said.
In the statement posted on Instagram, he added that the case is now in the hands of law enforcement and various attorneys, as well as the Georgia Composite Medical Board, which he said asked for his photos and videos of the autopsy.
He added he had not expected to find the baby’s head severed when he arrived for the autopsy and described his immediate response.
“I cried and I prayed and then I cried and I prayed because I had NEVER SEEN ANYTHING LIKE THIS — so I completed the autopsy!” he wrote.
EAST LANSING, Mich. (AP) — Michigan State missed an opportunity to provide some clarity about who was aware of sexual harassment allegations against Mel Tucker and what school leaders knew about them when its athletic director and interim president announced the coach was being suspended without pay.
It was just the latest misstep in a long line of them.
The institution has stumbled from scandal to scandal in recent years, none bigger or more devastating than the one it enabled with disgraced sports doctor Larry Nassar. After a female Michigan State graduate filed a complaint about Nassar’s abuse in 2014, a school investigation found he didn’t violate school policy.
Nassar went on to shatter more lives, and the scandal cost the school incalculable damage to its reputation along with more than $500 million, including a $4.5 million fine from the Education Department for failing to adequately respond to sexual assault complaints.
And now, Michigan State has another mess.
“It’s a repeat of 2014,” Rachael Denhollander, the first woman to publicly identify herself as a victim of Nassar, said in a telephone interview with The Associated Press. “One of the biggest questions back then was what did the school president and board know.”
Brenda Tracy, an activist and rape survivor, alleged Tucker sexually harassed her during a phone call in April 2022. Tracy filed a complaint with the school’s Title IX office eight months later, and that is when athletic director Alan Haller was informed an allegation of sexual misconduct had been made against Tucker, school spokeswoman Emily Guerrant said Tuesday.
While the investigation into the allegations was completed July 25, Michigan State interim President Teresa Woodruff and the school’s board of trustees did not know the details until Sunday, when USA Today published its report, Guerrant said.
“They’re either lying or grossly ignorant,” Denhollander told the AP. “They’re using victim protection to cover their own ignorance and that’s nonsense.”
Johanna Kononen, the law and policy director with the Michigan Coalition to End Domestic and Sexual Violence, said Michigan State’s Title IX procedures are confidential and the only people privy to information in the report and investigation are the parties themselves, their advisers and the finder of fact.
Still, Kononen said the process is not completely confidential.
“It seems unlikely, in a case involving such a prominent respondent, that university officials were not aware of the allegations against coach Tucker for the last 10 months,” she told the AP. “This defensive posture is disappointing where MSU is very aware of its historical failure to prioritize and protect its community from sexual impropriety.”
The 51-year-old Tucker, who is married and has two children, said the allegations against him are “completely false” and the intimate phone call he had with Tracy was consensual and outside the scope of both Title IX and school policy.
Tracy’s attorney, Karen Truszkowski, said her client’s identity was disclosed by an outside party, leading to the USA Today report that exposed explicit details of the investigation.
“Brenda Tracy had no intention of publicly disclosing her identity,” Truszkowski said Tuesday. “She was and continues to be committed to complying with and concluding the MSU internal investigative process.”
Guerrant said the university wanted to ensure a fair and comprehensive process and create a safe environment for individuals to come forward without a fear of institutional retaliation or breach of privacy.
“We are dismayed to learn the confidentiality was broken in this case,” she said.
A hearing is scheduled for the week of Oct. 5 to determine whether Tucker violated the school’s sexual harassment and exploitation policy.
Tucker is in the third year of a $95 million, 10-year contract and if he is fired for cause, the school would not have to pay him what’s remaining on his deal. Michigan State may fire Tucker for cause if he “engages in any conduct which constitutes moral turpitude or which, in the University’s sole judgement, would tend to bring public disrespect, contempt or ridicule upon the university,” according to his contract.
Officially, the school said “unprofessional behavior and not living up to the core values of the department and university” was the reason Tucker was suspended.
Tracy is known for her work with college teams, educating athletes about sexual violence. Michigan State paid her $10,000 to share her story with the team.
“By any metric, even if it was consensual, what he did was a violation of the school’s ethics policy because he initiated sexual relations with a contracted employee,” Denhollander said. “When he admitted that in March, he could have been immediately fired if the proper processes were in place at Michigan State and if the board was trained — or if they cared about this.”
___
Follow Larry Lage at https://twitter.com/larrylage
___
AP college football: https://apnews.com/hub/college-football and https://apnews.com/hub/ap-top-25-college-football-poll
BOSTON — Cars are getting an “F” in data privacy. Most major manufacturers admit they may be selling your personal information, a new study finds, with half also saying they would share it with the government or law enforcement without a court order.
The proliferation of sensors in automobiles — from telematics to fully digitized control consoles — has made them prodigious data-collection hubs.
But drivers are given little or no control over the personal data their vehicles collect, researchers for the nonprofit Mozilla Foundation said Wednesday in their latest “Privacy Not Included” survey. Security standards are also vague, a big concern given automakers’ track record of susceptibility to hacking.
“Cars seem to have really flown under the privacy radar and I’m really hoping that we can help remedy that because they are truly awful,” said Jen Caltrider, the study’s research lead. “Cars have microphones and people have all kinds of sensitive conversations in them. Cars have cameras that face inward and outward.”
Unless they opt for a used, pre-digital model, car buyers “just don’t have a lot of options,” Caltrider said.
Cars scored worst for privacy among more than a dozen product categories — including fitness trackers, reproductive-health apps, smart speakers and other connected home appliances — that Mozilla has studied since 2017.
Not one of the 25 car brands whose privacy notices were reviewed — chosen for their popularity in Europe and North America — met the minimum privacy standards of Mozilla, which promotes open-source, public interest technologies and maintains the Firefox browser. By contrast, 37% of the mental health apps the non-profit reviewed this year did.
Nineteen automakers say they can sell your personal data, their notices reveal. Half will share your information with government or law enforcement in response to a “request” — as opposed to requiring a court order. Only two — Renault and Dacia, which are not sold in North America — offer drivers the option to have their data deleted.
“Increasingly, most cars are wiretaps on wheels,” said Albert Fox Cahn, a technology and human rights fellow at Harvard’s Carr Center for Human Rights Policy. “The electronics that drivers pay more and more money to install are collecting more and more data on them and their passengers.”
“There is something uniquely invasive about transforming the privacy of one’s car into a corporate surveillance space,” he added.
A trade group representing the makers of most cars and light trucks sold in the U.S., the Alliance for Automotive Innovation, took issue with that characterization. In a letter sent Tuesday to U.S. House and Senate leadership, it said it shares “the goal of protecting the privacy of consumers.”
It called for a federal privacy law, saying a “patchwork of state privacy laws creates confusion among consumers about their privacy rights and makes compliance unnecessarily difficult.” The absence of such a law lets connected devices and smartphones amass data for tailored ad targeting and other marketing — while also raising the odds of massive information theft through cybersecurity breaches.
The Associated Press asked the Alliance, which has resisted efforts to provide car owners and independent repair shops with access to onboard data, if it supports allowing car buyers to automatically opt out of data collection — and granting them the option of having collected data deleted. Spokesman Brian Weiss said that for safety reasons the group “has concerns” about letting customers completely opt out — but does endorse giving them greater control over how the data is used in marketing and by third parties.
In a 2020 Pew Research survey, 52% of Americans said they had opted against using a product or service because they were worried about the amount of personal information it would collect about them.
On security, Mozilla’s minimum standards include encrypting all personal information on a car. The researchers said most car brands ignored their emailed questions on the matter, and those that did respond offered only partial, unsatisfactory answers.
Japan-based Nissan astounded researchers with the level of honesty and detailed breakdowns of data collection its privacy notice provides, a stark contrast with Big Tech companies such as Facebook or Google. “Sensitive personal information” collected includes driver’s license numbers, immigration status, race, sexual orientation and health diagnoses.
Further, Nissan says it can share “inferences” drawn from the data to create profiles “reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”
It was among six car companies that said they could collect “genetic information” or “genetic characteristics,” the researchers found.
Nissan also said it collected information on “sexual activity.” It didn’t explain how.
The all-electric Tesla brand scored high on Mozilla’s “creepiness” index. If an owner opts out of data collection, Tesla’s privacy notice says the company may not be able to notify drivers “in real time” of issues that could result in “reduced functionality, serious damage, or inoperability.”
Neither Nissan nor Tesla immediately responded to questions about their practices.
Mozilla’s Caltrider credited laws like the 27-nation European Union’s General Data Protection Regulation and California’s Consumer Privacy Act for compelling carmakers to provide existing data collection information.
It’s a start, she said, raising awareness among consumers much as a consumer backlash in the 2010s prompted TV makers to offer alternatives to surveillance-heavy connected displays.
LONDON — TikTok said Tuesday that operations are underway at the first of its three European data centers, part of the popular Chinese-owned app’s effort to ease Western fears about privacy risks.
The video sharing app said it began transferring European user information to a data center in Dublin. Two more data centers, another in Ireland and one in Norway, are under construction, TikTok said in an update on its plan to localize European user data, dubbed Project Clover.
TikTok has been under scrutiny by European and American regulators over concerns that sensitive user data may end up in China. TikTok is owned by ByteDance, a Chinese company that moved its headquarters to Singapore in 2020.
TikTok unveiled its plan earlier this year to store data in Europe, where there are stringent privacy laws, after a slew of Western governments banned the app from official devices.
NCC Group, a British cybersecurity company, is overseeing the project, TikTok’s vice president of public policy for Europe, Theo Bertram, said in a blog post.
NCC Group will check data traffic to make sure that only approved employees “can access limited data types” and carry out “real-time monitoring” to detect and respond to suspicious access attempts, Bertram said.
“All of these controls and operations are designed to ensure that the data of our European users is safeguarded in a specially-designed protective environment, and can only be accessed by approved employees subject to strict independent oversight and verification,” Bertram said.