After winding up in a shelter in December 2021, this dog has spent the last two years hoping that one day he will find his forever home. Finally, his wish has come true this holiday season as he’s been adopted just in time for Christmas.
The lovable pit bull mix found his world turned upside down when he ended up in a shelter just four days before Christmas in 2021, after his owner sadly died. The staff at Associated Humane Societies (AHS) in Tinton Falls, New Jersey, were devastated for the poor pup, who would no longer be enjoying scraps of turkey or opening presents with his owner.
Despite his heartbreaking experience, Mack continued smiling and brightening people’s days at the shelter. Sandy Hickman, the media coordinator for the AHS Popcorn Park Shelter, told Newsweek that “to know Mack is to love him.”
Staff spent the subsequent two years trying to find a home for Mack so he could live the rest of his years in peace and happiness. It may have taken longer than planned, but that day finally came in November 2023, and this Christmas looks a whole lot brighter for Mack.
Mack the pit bull mix at an adoption event before his adoption in November 2023. Mack wound up in a New Jersey shelter on December 21, 2021, when his owner sadly passed away. Associated Humane Popcorn Park Shelter
Hickman continued: “Mack was very big, happy, and healthy when he came to our shelter. He received lots of attention from our staff and volunteers who walked him and spent time with him on a regular basis. He attended several adoption events as well.
“His adopter loved him immediately, and she came in several times to spend time with him so she could get to know him prior to taking him home for good.”
While Mack’s story ends on a happier note, that isn’t true for every shelter animal. With an estimated 6.3 million companion animals winding up in shelters across the country each year, the American Society for the Prevention of Cruelty to Animals estimates that only about 4.1 million of those are eventually adopted.
With such an influx of animals in need of a home, around 3.1 million of which are thought to be dogs, it’s little surprise that shelters are struggling to cope with the intake. As Newsweek has previously reported, many shelters are way over capacity and seeing a substantial drop in adoptions, which is thought to be due in part to the high cost of living and unethical breeding.
After seeing one of their long-term residents finally find a home, the shelter shared pictures of Mack smiling gleefully on Facebook, showing that he’s now “living his very best life.” The post warmed many hearts and generated more than 1,600 reactions and 170 comments in a matter of days.
While so many people were delighted by the news that Mack has a home for the holidays, the shelter has many more dogs waiting for their day to come.
Mack is a pit bull mix who had to wait two years before finding a forever home. Mack received plenty of interest while at the shelter, but he wasn’t adopted until November 2023. Associated Humane Popcorn Park Shelter
“All three of our AHS shelters in New Jersey have so many wonderful dogs like Mack, who have been waiting for so long to be noticed,” Hickman said. “Mack is one of so many pit bull-types in shelters and we feel that there is a stigma attached to the breed, which negatively impacts their chances for adoption.
“They are all unique in their own way and we ask that potential adopters keep an open mind, meet the ones that are a little older or a little shy. You would be surprised at what you find when you spend some time with a shelter dog outside of the kennel environment and not judge them based solely on age or breed.”
Among the delighted comments on the post, one Facebook user wrote: “OMG awesome news! Happy life Mack!”
Another person responded: “So happy for Mack.”
While one person commented: “I love these adoption stories. You can just see the happiness on the dog’s face.”
Do you have any amazing rescue or adoption stories you want to share? Send them to life@newsweek.com with some details and they could appear in our Pet of the Week lineup.
Uncommon Knowledge
Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.
A new lawsuit filed by the New Mexico attorney general against Meta, the parent company of Facebook and Instagram, accuses the tech giant of using algorithms that create a marketplace for the sexual exploitation of children.
Facebook and Instagram are steering children to explicit content even when no interest is expressed, and are enabling child predators to find and contact minors, New Mexico Attorney General Raúl Torrez claimed Wednesday in announcing a lawsuit against parent company Meta Platforms and CEO Mark Zuckerberg.
Children are pressed by predators into providing photos of themselves or to participate in pornographic videos, alleges the civil suit filed on Tuesday in New Mexico state court. Torrez claimed that rather than providing “safe spaces for children,” the platforms are allowing predators to trade child pornography and solicit children for sex.
Meta has not implemented protections due to the potential hit on its advertising revenue, according to Torrez, whose office filed the lawsuit after an undercover investigation in which it set up phony accounts of fictional teens and preteens, using photographs generated by artificial intelligence. Meta’s algorithms recommended sexual content to those accounts, which were also subject to a stream of explicit messages and propositions from adults on the platforms.
“Meta has allowed Facebook and Instagram to become a marketplace for predators in search of children upon whom to prey,” the lawsuit alleges.
One account had investigators posting images of a fictional 13-year-old girl in Albuquerque, New Mexico, drawing thousands of adult followers. On Facebook Messenger, the account’s chats received graphic photos and videos three to four times a week, according to the complaint.
“Mr. Zuckerberg and other Meta executives are aware of the serious harm their products can pose to young users, and yet they have failed to make sufficient changes to their platforms that would prevent the sexual exploitation of children,” Torrez said in a statement.
He added, “Despite repeated assurances to Congress and the public that they can be trusted to police themselves, it is clear that Meta’s executives continue to prioritize engagement and ad revenue over the safety of the most vulnerable members of our society.”
The state’s suit cited multiple recent criminal cases in New Mexico, including one perpetrator accused of recruiting more than 100 minor victims through Facebook.
Meta did not immediately respond to a request for comment. However, earlier this month Meta posted a blog post about its work to fight child predators, and told CBS News that it has hired specialists focused on online child safety and is developing new technology to “root out predators.”
“Child exploitation is a horrific crime and online predators are determined criminals,” Meta said on December 1. “They use multiple apps and websites, test each platform’s defenses, and adapt quickly. We work hard to stay ahead.”
The New Mexico suit comes in the wake of a suit filed in October by 41 other states and the District of Columbia contending Meta had deliberately engineered Instagram and Facebook to be addictive to children and teens.
Dr. Joan Donovan, a former Harvard disinformation scholar, claims in a new disclosure that the university’s cozy relationship with alumni Mark Zuckerberg and his wife, Priscilla Chan, led to her termination.
In the whistleblower declaration made public on Monday, Donovan claims her studies on media manipulation campaigns were restricted following a $500 million donation from the Chan Zuckerberg Initiative to fund an artificial intelligence center in 2021.
“From that very day forward, I was treated differently by the university to the point where I lost my job,” Donovan told The Logic.
The disclosure was sent on Donovan’s behalf to Harvard and U.S. Education Secretary Miguel Cardona by Whistleblower Aid last week.
The Chan Zuckerberg Initiative is a philanthropic organization run by Zuckerberg and Chan.
Donovan claims she was terminated in 2022 after Harvard shut down her research. She had worked at the university since 2018 running the Technology and Social Change Research Project for the Shorenstein Center at Harvard University’s John F. Kennedy School of Government.
The disclosure calls for an investigation into the Kennedy School and “all appropriate corrective action.”
Harvard, meanwhile, has rejected Donovan’s allegations and maintains she wasn’t fired.
“Allegations of unfair treatment and donor interference are false. The narrative is full of inaccuracies and baseless insinuations, particularly the suggestion that Harvard Kennedy School allowed Facebook to dictate its approach to research,” said Harvard spokesperson James Francis Smith in a statement to CNN.
“By longstanding policy to uphold academic standards, all research projects at Harvard Kennedy School need to be led by faculty members. Joan Donovan was hired as a staff member (not a faculty member) to manage a media manipulation project. When the original faculty leader of the project left Harvard, the School tried for some time to identify another faculty member who had time and interest to lead the project. After that effort did not succeed, the project was given more than a year to wind down. Joan Donovan was not fired, and most members of the research team chose to remain at the School in new roles,” he said.
The disclosure notes that the Chan Zuckerberg donation came shortly after the 2021 “Facebook Papers” whistleblower complaint from former Facebook employee Frances Haugen.
Harvard made the papers public with the help of Donovan, who archived the documents for public research.
Following her departure from Harvard, Donovan announced in August that she is joining Boston University’s College of Communication as an assistant professor.
The company that owns Facebook and Instagram has for years relied on both social media platforms to keep children and teenagers engaged for as long as possible in order to gather personal data and sell it to advertisers, a group of state prosecutors alleged in a recently unsealed complaint.
Attorneys general in 33 states filed a federal lawsuit against Meta in October, although the details at the time were not immediately released. But the complaint, unsealed Wednesday, unveils more specifics, such as allegations from the state prosecutors that Meta harmed young users on Facebook and Instagram through the use of highly manipulative algorithms and technological tools.
These techniques were allegedly deliberately deployed by Meta to attract and sustain engagement, as it collected personal information for advertisers, including from children without parental consent — which is required by law, according to the lawsuit.
Attorneys general from states ranging from California to Wisconsin are part of the lawsuit. They allege compulsive use of Facebook or Instagram by teens and children can cause physical and mental harm, according to the 233-page complaint.
State prosecutors built their case, in part, using snippets of emails, earnings call transcripts and other internal communications — all of which suggest the extreme value of young users’ personal information and time to company profits.
In an emailed statement from October when the joint suit was filed, Meta said it was disappointed by the route taken by the attorneys general.
Meta is determined to provide teens with “safe, positive experiences online, and have already introduced over 30 tools to support teens and their families,” the company said at the time.
In a Monday statement, a Meta spokesperson said, “The complaint mischaracterizes our work using selective quotes and cherry-picked documents.”
“Time spent”
State prosecutors allege in the complaint that Meta’s business strategy for more growth and profit is based on so-called “time spent,” which refers to how long the website can keep users engaged in posts, pictures, videos and other content. The longer a user stays on Facebook or Instagram, the more personal data the platform can collect, according to the complaint.
“Increasing the time spent on Meta’s platforms increases the effective delivery of targeted ads — a pivotal factor in Meta’s ability to generate revenue,” the complaint reads.
One of the ways Meta keeps a user on its social media platforms is deploying a special technology called “recommendation algorithms,” the complaint alleges.
“These algorithms do not promote any specific message by Meta,” the lawsuit claims. “Rather, the algorithms function on a user-by-user basis, detecting the material each individual is likely to engage with and then increasingly displaying similar material to maximize the time spent and user data collected on the platforms.”
Users under 13
Meta collects personal data on all Facebook and Instagram users, including those who are under the age of consent, state prosecutors allege. The tech giant collects the data even though the platforms did not get parental consent for users who are under 13, the lawsuit claims.
Collecting the data violates the federal Children’s Online Privacy Protection Act of 1998, prosecutors allege.
Meta said in a statement that no one under 13 is allowed to have an account on Instagram, and that the company deletes accounts from underage users whenever it finds them.
“However, verifying the age of people online is a complex industry challenge,” the company said. “Many people — particularly those under the age of 13 — don’t have an ID, for example. That’s why Meta is supporting federal legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps.”
The issue of how Meta platforms impact young children became front and center in 2021 when Meta employee-turned-whistleblower Frances Haugen shared documents from internal company research. In an interview with CBS News’ Scott Pelley, Haugen noted data indicating Instagram worsens suicidal thoughts and eating disorders for certain teenage girls.
“Meta knows that what it is doing is bad for kids — period,” California Attorney General Rob Bonta alleged in a statement Monday. “Thanks to our unredacted federal complaint, it is now there in black and white, and it is damning.”
Khristopher J. Brooks is a reporter for CBS MoneyWatch. He previously worked as a reporter for the Omaha World-Herald, Newsday and the Florida Times-Union. His reporting primarily focuses on the U.S. housing market, the business of sports and bankruptcy.
Facebook parent Meta Platforms deliberately engineered its social platforms to hook kids and knew—but never disclosed—it had received millions of complaints about underage users on Instagram but only disabled a fraction of those accounts, according to a newly unsealed legal complaint described in reports from The Wall Street Journal and The New York Times.
The complaint, originally made public in redacted form, was the opening salvo in a lawsuit filed in late October by the attorneys general of 33 states.
Company documents cited in the complaint described several Meta officials acknowledging the company designed its products to exploit shortcomings in youthful psychology such as impulsive behavior, susceptibility to peer pressure and the underestimation of risks, according to the reports.
Others acknowledged Facebook and Instagram also were popular with children under age 13 who, per company policy, were not allowed to use the service.
Meta said in a statement to The Associated Press that the complaint misrepresents its work over the past decade to make the online experience safe for teens, noting it has “over 30 tools to support them and their parents.”
With respect to barring younger users from the service, Meta argued age verification is a “complex industry challenge.”
Instead, Meta said it favors shifting the burden of policing underage usage to app stores and parents, specifically by supporting federal legislation that would require app stores to obtain parental approval whenever youths under 16 download apps.
One Facebook safety executive alluded to the possibility that cracking down on younger users might hurt the company’s business in a 2019 email, according to the Journal report.
But a year later, the same executive expressed frustration that while Facebook readily studied the usage of underage users for business reasons, it didn’t show the same enthusiasm for ways to identify younger kids and remove them from its platforms, the Journal reported.
The complaint noted that at times Meta has a backlog of up to 2.5 million accounts of younger children awaiting action, according to the newspaper reports.
Sam Altman is returning to OpenAI but power at the artificial-intelligence start-up is still set to be held by its board. The members who fired Altman are largely out and their replacements suggest the new board will be less inclined to slow or block the development of AI technology.
The Chinese government has built up the world’s largest known online disinformation operation and is using it to harass US residents, politicians, and businesses—at times threatening its targets with violence, a CNN review of court documents and public disclosures by social media companies has found.
The onslaught of attacks – often of a vile and deeply personal nature – is part of a well-organized, increasingly brazen Chinese government intimidation campaign targeting people in the United States, documents show.
The US State Department says the tactics are part of a broader multi-billion-dollar effort to shape the world’s information environment and silence critics of Beijing that has expanded under President Xi Jinping. On Wednesday, President Biden is due to meet Xi at a summit in San Francisco.
Victims face a barrage of tens of thousands of social media posts that call them traitors and dogs and hurl racist and homophobic slurs at them. They say it’s all part of an effort to drive them into a state of constant fear and paranoia.
Often, these victims don’t know where to turn. Some have spoken to law enforcement, including the FBI – but little has been done. While tech and social media companies have shut down thousands of accounts targeting these victims, they’re outpaced by a slew of new accounts emerging virtually every day.
Known as “Spamouflage” or “Dragonbridge,” the network’s hundreds of thousands of accounts spread across every major social media platform have not only harassed Americans who have criticized the Chinese Communist Party, but have also sought to discredit US politicians, disparage American companies at odds with China’s interests and hijack online conversations around the globe that could portray the CCP in a negative light.
Private researchers have tracked the network since its discovery more than four years ago, but only in recent months have federal prosecutors and Facebook’s parent company Meta publicly concluded that the operation has ties to Chinese police.
Meta announced in August it had taken down a cluster of nearly 8,000 accounts attributed to this group in the second quarter of 2023 alone. Google, which owns YouTube, told CNN it had shut down more than 100,000 associated accounts in recent years, while X, formerly known as Twitter, has blocked hundreds of thousands of China “state-backed” or “state-linked” accounts, according to company blogs.
Still, given the relatively low cost of such operations, experts who monitor disinformation warn the Chinese government will continue to use these tactics to try to bend online discussions closer to the CCP’s preferred narrative, which frequently entails trying to undermine the US and democratic values.
“We might think that this is confined to certain chatrooms, or this platform or that platform, but it’s expanding across the board,” Rep. Mike Gallagher, chairman of the House Select Committee on the CCP, told CNN. “And it’s only a matter of time before it happens to that average American citizen who doesn’t think it’s their problem right now.”
When trolls disrupted an anti-communism Zoom event organized by New York-based activist Chen Pokong in January 2021, he had little doubt who was responsible. The trolls mocked participants and threatened that one victim would “die miserably.” Their conduct reminded Chen of repression by the government of China, where he spent nearly five years in prison for pro-democracy work.
But his suspicions about who was behind the interruption were solidified when the US Department of Justice charged more than 30 Chinese officials earlier this year with running a sprawling disinformation operation that had targeted dissidents in the US, including those in the Zoom meeting Chen says he hosted in 2021.
It was just one of multiple indictments the Justice Department unsealed in April exposing alleged Chinese government plots to target its perceived critics and enemies, while undermining the sovereignty of the United States. Two alleged Chinese operatives were charged with running an “undeclared police station” in New York City. Last year, another indictment outlined how Chinese agents allegedly tried to derail the congressional campaign of a Chinese dissident.
“They want to deprive my freedom of speech, so I feel like it’s not only an attack on me,” said Chen, who was ejected from his own meeting during the disruption. “They also attack America.”
The DOJ complaint named 34 individual officers with China’s Ministry of Public Security and published photographs of them at computers, allegedly working on the disinformation campaign known as the “912 Special Project Working Group.” The operation, primarily based in Beijing, appears to involve “hundreds” of MPS officers across the country, according to an FBI agent’s affidavit.
The complaint does not refer to the cluster of fake accounts as “Spamouflage,” but private researchers and a spokesperson for Meta told CNN that the social media activity described by the DOJ is part of that network. As part of a mission “to manipulate public perceptions of [China], the Group uses its misattributed social media accounts to threaten, harass and intimidate specific victims,” the complaint states.
When asked about Spamouflage’s reported links to Chinese law enforcement, a spokesperson for China’s embassy in Washington, Liu Pengyu, denied the allegations.
“China always respects the sovereignty of other countries. The US accusation has no factual evidence or legal basis. It is entirely politically motivated. China firmly opposes it,” Liu said in a statement to CNN. He claimed that the US “invented the weaponizing of the global information space.”
A report released by Meta in August illustrates how the posts from the network often align with the workday hours in China. The report described “bursts of activity in the mid-morning and early afternoon, Beijing time, with breaks for lunch and supper, and then a final burst of activity in the evening.”
And while Meta detected posts from various regions in China, the company and other researchers have found centralized coordination that relentlessly pushed identical messages across multiple social media platforms, sometimes repeatedly insulting the same individuals who have questioned the Chinese government.
One of those individuals is Jiayang Fan, a journalist for The New Yorker who told CNN she began facing harassment by the network when she covered pro-democracy protests in Hong Kong in 2019.
Attacks directed at Fan – which ranged from cartoons of her painting her face white as though rejecting her identity to accusations that she killed her mother for profit – carry telltale signs of the Spamouflage network, said Darren Linvill of the Media Forensics Hub at Clemson University. Linvill’s group found more than 12,000 tweets attacking Fan using the same hashtag, #TraitorJiayangFan.
Although she hasn’t lived in China since she was a child, Fan believes such messages have been leveled against her to spark fear and silence others.
“This is part of a very old Chinese Communist Party playbook to intimidate offenders and aspiring offenders,” said Fan, who questioned what her distant relatives in China may think when they see such content. “It is uncomfortable for me to know that they are seeing these portrayals of me and have no idea what to believe.”
Experts who track online influence campaigns say there are signs of a shift in China’s strategy in recent years. In the past, the Spamouflage network mostly focused on issues domestically relevant to China. However, more recently, accounts tied to the group have been stoking controversy around global issues, including developments in the United States.
Spamouflage accounts – some of which posed as Texas residents – called for protests of plans to build a rare-earths processing facility in Texas and spread negative messages about a separate US manufacturing company, according to a report by cybersecurity firm Mandiant last year. The report also described how the campaign promoted negative content about the Biden administration’s efforts to hasten mineral production that would curb US reliance on China.
Other posts by the network have referenced how “racism is an indelible shame on American democracy” and how the US committed “cultural genocide against the Indians,” according to a Meta report in August. Another post claimed that former House Speaker Nancy Pelosi is “riddled with scandals.”
Chinese government-linked accounts have also posted messages that included a call to “kill” President Biden, a cartoon featuring the so-called QAnon Shaman who rioted at the US Capitol as a symbol of “western style democracy,” and a post that suggested US defense contractors profit off the deaths of innocent people, according to a Department of Homeland Security report in April obtained through a records request.
The DOJ complaint filed against Chinese officials alleged that last year they sought to take advantage of the second anniversary of George Floyd’s death and post on social media about his murder to “reveal the law enforcement brutality” in the US. They also received a task to “work on 2022 US midterm elections and criticize American democracy.”
Spamouflage is “evolving in tactics. It’s evolving in themes,” said Ben Nimmo, the global lead for threat intelligence at Meta. “Our job is to keep on raising our defenses and keep on telling people about it, especially as we get closer to the election year.”
Yet as social media companies race to stop disinformation and the US government files complaints against those allegedly responsible, accountability can be elusive.
“This is the rub with a lot of cybercrimes, that it becomes very, very difficult to actually put the perpetrators in jail,” said Lindsay Gorman, the head of technology and geopolitics at the German Marshall Fund’s Alliance for Securing Democracy.
But, Gorman added, that doesn’t mean there are no consequences for China.
“Even if individuals have a degree of impunity because they are never planning on coming to the United States anyway, that doesn’t mean that the party operation has impunity here – certainly not in terms of public opinion, certainly not in terms of US-China relations,” she said.
Meta, Google, and other companies that have published reports outing Spamouflage stress that most of the social media accounts within the network receive little or no engagement, meaning they rarely go viral.
But Linvill of Clemson University argues that the network uses a unique strategy of “flooding” conversations with so many comments that posts from genuine users receive less attention. This includes posting on platforms typically not associated with disinformation, such as Pinterest.
“They are operating thousands of accounts at a time on a given platform, often to drown out conversations, just with sheer volume of messaging,” Linvill said. “When we think of disinformation, we often think of pushing ideas on users and making ideas more salient, whereas what China is doing is the opposite. They are trying to remove conversations from social media.”
When Beijing hosted the 2022 Winter Olympics, for example, human rights groups began promoting the hashtag #GenocideGames to bring attention to accusations that China has detained more than a million Uyghurs and other Muslim minorities in internment camps.
But then something surprising happened. Accounts that Linvill and his colleagues believed were part of Spamouflage started tweeting the hashtag too.
It might be counterintuitive for a pro-Chinese government group to start spreading a hashtag that brought attention to the Chinese government’s human rights abuses, Linvill explained. But by using the hashtag repeatedly in tweets that had nothing to do with the issue itself, Spamouflage was able to reduce views on the legitimate messages.
Jiajun Qiu, whose academic work focused on elections and who fled China in 2016, showed CNN what happens when he types his name into X, formerly known as Twitter. There are sometimes dozens of accounts pretending to be him by using his name and photo.
They are designed by the operators of Spamouflage, Linvill explained, to confuse people and prevent them from finding Qiu’s real account by muddying the waters.
Now living in Virginia, Qiu runs a pro-democracy YouTube channel and has faced an onslaught of homophobic, racist and bizarre insults from social media accounts that Linvill’s team and others have tied to Spamouflage.
Some accounts have posted cartoons that depict Qiu as an insect working on behalf of the US government. Another image depicts him being stomped by a cartoon Jesus. Yet another paints him as a dog on the leash of an American rat.
“I tell people the truth, so they want to do anything possible to insult me,” Qiu said.
Linvill and his team have tracked hundreds of these cartoons across the internet, and said they are a “tell” of Spamouflage. Cartoons, Linvill explained, can be more effective than text because they are “eye-catching” and “you have to stop and look at it.” In addition, these original cartoons can easily be translated into hundreds of languages at a very low cost.
Beyond the online smears, Qiu says he has also faced threats via other online messages and escalatory calls from unidentified sources who he believes have ties to the Chinese government. One anonymous message told him he would be arrested and brought to justice for breaking Chinese law. An email referenced the church he attends in Manassas, Virginia, and said, “for his own safety and that of the worshippers, he would do well to find another place to stay.”
Qiu told CNN that the FBI has interviewed him four times regarding these threats, and that he has been instructed to contact local police if he is ever followed.
An ongoing Domino’s free pizza promo turned into chaos as people walked out of stores with multiple pizzas that didn’t cost them anything due to an exploitable glitch that got spread around on social media. At least one manager, amid the free pizza chaos, reportedly texted a Domino’s employee: “Don’t make any free pizzas. Cancel them. As soon as one pops on the screen check and see if it’s a free emergency pizza. If it is cancel it ASAP.”
The Top 10 Most-Played Games On Steam Deck: October 2023 Edition
In early October, Domino’s Pizza launched a new promo called the “Emergency Pizza” program. Folks who ordered a qualifying pizza from the company would receive a code that could be used for one free medium pizza at a later date. A backup, “emergency” pizza, if you will. The promo, like most other fast food promotional events, went under the radar for most folks and was working fine until this week. That’s when things went wrong.
While nothing has yet been confirmed, it appears people were able to figure out (or someone accidentally shared) Emergency Pizza codes that could be used over and over again by the same customer. This, obviously, isn’t how the program was intended to work. These codes quickly began to spread online, with even popular deals tweeter Wario64 posting them on November 9.
“58 pizzas and all are carryout and some are…the same person,” explained one employee on Reddit. “Like, one guy had 10 pizzas and another person had 8 pizzas. Damn those people that took advantage of the system. But hey one of the free bastards gave me a 20-dollar tip. So I guess worth it, somewhat.”
“My store ended up selling 170 medium pizzas in an hour and a half,” posted another user.
“ONE GUY ALONE placed 24 orders over the next 5 or so days for these free pizzas,” said another staff member.
Quickly, stores were overloaded with free pizza orders as people abused the system and the glitched codes being shared online. While some greedy pizza lovers were trying to walk out of Domino’s with a dozen pizzas or more, others were going a different route and using the broken codes to schedule multiple, free pizza deliveries for weeks.
“I looked in the system last night, like a hundred timed orders stretching out weeks for free pizza,” posted one supposed Domino’s staff member. “Once I saw all the duplicates on the line today, I looked up order history, then searched online, and had to step off to call the district manager.”
Eventually, during the worst of the free pizza apocalypse, things got so bad that managers reportedly began panic-messaging employees at local stores around the country, telling them to stop making free pizzas and to cancel those orders immediately. Some employees, fed up with angry customers coming in and yelling about their free pizzas not being ready, posted how happy they were to cancel all these glitched orders. Other employees claimed that their stores actually honored the deal. Throughout it all, it seemed Domino’s corporate higher-ups didn’t have much support or guidance to provide overworked and frustrated employees. (Based on posts on the Domino’s subreddit, this is common behavior from the national pizza chain.)
One employee told me via Reddit DMs that even at their smaller store they were swamped with free orders, leading to cancellations.
“Anytime an order would come in, we would have to call customers and let them know that we couldn’t do their order,” the Domino’s employee explained. “I think most people knew they shouldn’t have exploited the code, so I personally had nobody too [upset about] their order being canceled.”
As for whether corporate got involved to help, I was told that they likely only talked to district managers, who then spread the news around to others. However, the employee I spoke with made it clear that during the free pizza debacle, staff received no explanation from higher-ups. Instead, employees shared information via the subreddit and group chats. One manager even reportedly pinged an employee asking them for updates on the situation from that subreddit as they knew the staff member was active on the site.
Today, after the parmesan dust has settled, staff seem confused as to why Domino’s even ran a promo like this and why the company didn’t do more when it became clear that a glitch was causing people to walk away with stacks of free pizza pies.
“Domino’s is an awful company that is bad at basically everything,” posted one employee when asked why this promo even happened. “They don’t know how to increase business because they don’t understand what the problems are.”
Meta founder and CEO Mark Zuckerberg revealed on Instagram and Threads (both of which he owns) that he had ACL surgery on his knee last week — and recovery is proving to be a challenge.
Zuck told followers he suffered the injury while practicing MMA in his backyard for an upcoming fight.
“I was training for a competitive MMA fight early next year, but now that’s delayed a bit,” he said. “Still looking forward to doing it after I recover. Thanks to everyone for the love and support.”
On Threads, Meta’s competitor to Elon Musk’s X, Zuckerberg updated his followers on his progress.
Naturally, the CEO has found himself playing UFC games to fill the void of not being able to fight in real life — except things became “a bit too real” when his character in the game was also, ironically, injured.
“My fighter started 39 years old, but turns out every time you lose your fighter needs 9 months to recover from injuries plus time to get a new fight and then training camp,” he explained. “I chose the hardest difficulty and found myself sitting here at the peak of post-surgery pain with my fighter 0-8, almost 54 years old, still trying to get his first win in the UFC.”
A classic case of life imitating art.
Zuck has long been a fan of martial arts (he rented out the entire UFC APEX in Las Vegas for a match in 2022, after all) but has been taking the sport more seriously of late.
This past May, the billionaire won a gold medal in his first-ever Jiu-Jitsu competition in Woodside, California.
“MMA is the perfect thing,” Zuckerberg told host Joe Rogan on an August 2022 episode of The Joe Rogan Experience. “After an hour or two of working out or rolling or wrestling with friends, or training with different folks, it’s like now I’m ready to go solve whatever problem at work for the day.”
According to Healthline, healing from ACL surgery takes at least nine months, including the initial post-surgery healing phase and physical therapy further down the line.
PlayStation 5 is ditching its integration with Twitter, the social media platform recently rebranded as “X” after Elon Musk bought it for $44 billion and then promptly crashed it into a brick wall like a dad coming home from a mid-life crisis bender in his brand-new Ferrari. Nintendo Switch will soon be the only gaming console you can still tweet from.
Sony announced the change in a new notification to PS5 users today. “As of November 13, 2023, interaction with X (formerly known as Twitter) will no longer function on PlayStation 5 and PlayStation 4 consoles,” the company wrote. “This includes the ability to view any content published on X on PS5/PS4, and the ability to post and view content, trophies, and other gameplay related activities on X directly from PS5/PS4 (or link an X account to do so).”
Twitter was one of three main social media platforms alongside Facebook and YouTube that the PS4 directly connected to when its new sharing feature first debuted back in 2013. There was an entirely new button on the DualShock 4 dedicated just to capturing images and quickly flinging them across the internet. The ease with which secrets, spoilers, exploits, glitches, and all kinds of other gameplay discoveries could be instantly shared completely changed how people played games and talked about them.
It won’t be impossible to keep sharing game moments to social media when Twitter integration ends later this month, but it’s another reminder that the current internet is dying. YouTube is a pain and Facebook is, well, Facebook. Neither facilitates the constantly updating wire service-like feed Twitter once embodied. The best way to get images off your PS5 and PS4 now is to have them automatically sync with Sony’s dedicated PlayStation app. From there you can repost them to one of Twitter’s many new clones, make a video on TikTok, or send them to your favorite Discord server.
Microsoft bailed on Twitter back in April, shortly after Musk announced he would start charging companies to have access to the platform’s API, the tool needed to make two programs work together. The tech billionaire accused the trillion-dollar tech company of stealing Twitter’s data to train its AI products. In the months since, celebrities, brands, and average users have all continued to abandon the dying platform. It has lost roughly 13 percent of its users from a year ago, half its ad revenue, and is now apparently worth over $20 billion less than what Musk originally paid for it.
The Supreme Court heard arguments Tuesday over whether the 1st Amendment helps or hurts public officials who use their personal Facebook pages to communicate with constituents — and sometimes block their critics.
The justices heard an appeal from two San Diego-area school board members who were sued for violating the free-speech rights of a parent. The board members had blocked the parent, Christopher Garnier, from their accounts, saying he had posted dozens of repetitive comments to their personal Twitter and Facebook pages.
Federal courts in California sided with Garnier and ruled the 1st Amendment barred officials from excluding their critics if the board members used their personal pages for public business.
Three years ago, President Trump suffered a similar defeat when federal courts in New York ruled he violated the 1st Amendment by blocking his critics from his Twitter account. The Supreme Court later dismissed his appeal because he was then out of office.
Now the issue is before the court in the case of Michelle O’Connor-Ratcliff, a school board member from the Poway Unified School District, and T.J. Zane, a former member who was also sued.
Their case was paired with one from a city manager in Port Huron, Mich., who won a ruling blocking an online critic.
The legal issue before the high court is whether public officials “engage in state action” when they use their personal pages to communicate with the public.
The 9th Circuit Court of Appeals in San Francisco ruled the school board members took official action and were bound by the 1st Amendment. “They clothed their pages in the authority of their offices and used their pages to communicate about their official duties,” said Judge Marsha Berzon.
The board members appealed and urged the justices to overturn the 9th Circuit’s ruling, which sets the law for public officials throughout California and the Western states.
They argued they were expressing their personal views on social media, and their Facebook or Twitter accounts did not speak for the school district.
A ruling in favor of Garnier “will have the unintended consequence of creating less speech if the social-media pages of public officials are overrun with harassment, trolling, and hate speech, which officials will be powerless to filter,” they said.
In the Michigan case, by contrast, the 6th Circuit Court ruled the city manager’s Facebook page was his personal account and was not part of his job or official duties.
Usually the 1st Amendment protects the rights of writers or speakers, but in cases such as these, it may give others a right to reply to the speaker.
The pair of cases heard Tuesday present the first of three disputes before the Supreme Court over how the 1st Amendment applies to social media.
The justices will also rule on whether states such as Texas and Florida violate the 1st Amendment if they penalize social media platforms for allegedly discriminating against conservatives. They will also decide whether the Biden administration violated the 1st Amendment by pressing Facebook and other platforms to remove “disinformation” about COVID-19 and vaccines.
Police are searching for the gunman responsible for two shootings in Lewiston, Maine.
According to local law enforcement authorities, at least 22 people were killed and 50 to 60 injured in the gunfire.
The Androscoggin County Sheriff’s Office said the suspect is still at large and posted photos of him on its Facebook page. Lewiston’s Sun Journal said law enforcement agencies were searching for 40-year-old Robert Card as a possible suspect.
“We are encouraging all businesses to lock down and/or close while we investigate,” the sheriff’s office said.
The Maine State Police also posted about the shootings on Facebook.
“There is an active shooter situation in the city of Lewiston. Law enforcement is asking people to shelter in place. Please stay inside your home with the doors locked,” the post said. “Law enforcement is currently investigating at two locations right now. Again please stay off the streets and allow law enforcement to diffuse the situation. If you see any suspicious activity or individuals please call 911. Updates to follow.”
Newsweek reached out to the Androscoggin County Sheriff’s Office and the Maine State Police for more details.
This image shows the man sought by police after two shootings in Lewiston, Maine, in which at least 16 people were killed. Androscoggin County Sheriff’s Office via Facebook
“I am aware of and have been briefed on the active shooter situation in Lewiston. I urge all people in the area to follow the direction of State and local law enforcement,” Maine Governor Janet Mills said. “I will continue to monitor the situation and remain in close contact with public safety officials.”
Other Maine politicians have released statements about the shootings in Lewiston, and the White House announced President Joe Biden has been briefed.
Earlier on Wednesday, Lewiston police wrote on Facebook that they were dealing with an active shooter incident at Schemengees Bar and Grille and at a bowling alley named Sparetime Recreation.
The photos released by Androscoggin County Sheriff’s Office appear to show the suspect leaving the bowling alley while carrying a rifle.
The Lewiston Police Department released a photograph Wednesday night of a white Subaru Outback that officers said may have a bumper painted black.
Lewiston’s Central Maine Medical Center also reported it was dealing with a “mass casualty incident.”
“Central Maine Medical Center is reacting to a mass casualty, mass shooter event. At this time there are no specifics to share on the number of casualties,” a statement from the hospital said. “Central Maine Healthcare is coordinating with area hospitals to take in patients.”
Update: 10/25/23, 11:01 p.m.: This article was updated with further information and background.
Meta (META) reported its third quarter earnings on Wednesday, beating on the top and bottom lines.
The company’s shares rose as much as 4% in after-hours trading, as it continues to rebound from a lackluster 2022.
Meta’s been navigating rough waters, steadying itself as an AI-powered advertising giant and working through its capital-intensive expansion into VR and AR. The Facebook and Instagram parent has been in the process of shoring up two key areas of interest for investors — its AI efforts and its position in the digital advertising market, which has been in a prolonged slump and is just showing signs of a rebound.
Meta’s Q3 advertising revenue came in at $33.64 billion, compared to the expected $32.94 billion. The company beat on ad impressions estimates, clocking an increase of 31% year over year, versus the expected 29.6%.
Meta shares have risen more than 140% year to date, massively outperforming both the S&P 500 and the Nasdaq Internet Index, which are up around 9% and 34% this year, respectively.
“The stock has done well this year,” Neuberger Berman analyst Daniel Flax told Yahoo Finance Live on Wednesday. “[If they can] drive durable growth and translate that into earnings per share and free cash flow generation, I think the stock can continue to work its way higher.”
Meta’s near future could be mired in legal risks, as the company is staring down federal and state lawsuits from 42 attorneys general, who are alleging that Facebook and Instagram’s features geared toward children are addictive.
“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” a Meta spokesperson said in a statement.
Currently, Wall Street analysts’ recommendations for Meta break down to 60 Buys, seven Holds, and two Sells.
The earnings rundown
Here are the key numbers that Meta reported, as compared to analysts’ estimates compiled by Bloomberg:
Revenue: $34.15 billion actual, up 23% year-over-year, versus $33.52 billion expected
Earnings per share: $4.39 actual, up 168% year-over-year, versus $3.60 expected
Facebook daily active users: 2.09 billion actual, versus 2.07 billion expected
Zuckerberg’s “Year of Efficiency” initiatives seem to be paying off, as the company is decreasing its 2023 capital expenditures outlook. It’s revising the range to be between $27 billion and $29 billion, a decline from the previously announced $27 billion to $30 billion.
Meta’s Family of Apps business, which also includes WhatsApp, raked in over $33 billion in revenue. The division’s operating income was $17.49 billion for the quarter, handily beating analysts’ expectation of $15.23 billion.
But Reality Labs, the company’s mixed reality business, has been a subject of controversy. Since 2022, Meta has lost more than $20 billion running Reality Labs; $13.7 billion of that came from last year.
The company said it expects these losses to continue, and will increase notably year over year in 2023. Meta recently launched its Quest 3 headset, priced at $499.99.
“We had a good quarter for our community and business,” Meta CEO Mark Zuckerberg said in a statement. “I’m proud of the work our teams have done to advance AI and mixed reality with the launch of Quest 3, Ray-Ban Meta smart glasses, and our AI studio.”
Lawsuits brought by 41 state attorneys general accuse Meta, the company that owns Facebook and Instagram, of designing apps that were addictive to children. Jo Ling Kent reports.
A group of 41 attorneys general from dozens of states is filing lawsuits claiming Meta Platforms Inc. built addictive features into its Facebook and Instagram services that harm children.
The lawsuits in federal and state courts allege Meta knowingly marketed its products to users under the age of 13, who are barred from the platform by both Meta’s policies and federal law. The states are seeking to force Meta to change product features that they say pose dangers to young users.
The lawsuit, filed Tuesday in federal court in Northern California, claims Meta “has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens.” Meta has “profoundly altered the psychological and social realities of a generation of young Americans,” the suit also said.
The lawsuit also accuses Meta of violating the law by collecting data on users under 13 without parental consent. California Attorney General Rob Bonta said the suit was the result of a multiyear investigation.
Meta said it was “disappointed” with the legal action.
“We share the attorneys general’s commitment to providing teens with safe, positive experiences online, and have already introduced over 30 tools to support teens and their families,” a Meta spokesman said in an email. “We’re disappointed that instead of working productively with companies across the industry, the attorneys general have chosen this path.”
Meta’s stock was flat in late-afternoon trading Tuesday.
Meta is considering charging European users for versions of its Instagram and Facebook apps, which are currently free, to comply with European Union regulations.
The technology company has proposed charging Instagram and Facebook users in Europe about $13 a month to avoid seeing ads, a source told CBS MoneyWatch. That’s roughly what competitors such as YouTube Premium charge for accounts in Europe. The Wall Street Journal first reported on Meta’s plan.
Meta is required to comply with European Union privacy rules that restrict its ability to target users with personalized ads based on their online browsing activity. Facebook and Instagram, which are free, are largely supported by advertising. Ireland’s Data Privacy Commissioner previously fined the company for requiring app users to consent to viewing ads based on their online activity.
The new proposal would offer European users two choices: continue using free versions of Instagram and Facebook with personalized ads, or pay for ad-free subscriptions. The changes would not affect Meta app users in other countries, including the U.S.
A source familiar with the matter told CBS MoneyWatch that Meta’s proposal is not set in stone and it continues to explore a range of options to comply with the EU regulations.
“Meta believes in the value of free services which are supported by personalized ads. However, we continue to explore options to ensure we comply with evolving regulatory requirements,” a Meta spokesperson said in a statement to CBS MoneyWatch.
WhatsApp said on Wednesday that it will offer credit card payments and services from rival digital payment providers within its app in India, the latest bet by the Meta-owned service to boost commerce offerings in its biggest market.
WhatsApp has more than 500 million users in India, though regulators there have capped its in-app WhatsApp Pay service to only 100 million people.
People shopping on WhatsApp could previously pay using popular services like Alphabet Inc’s Google Pay, Paytm and Walmart’s PhonePe, but only after being redirected outside WhatsApp.
Payments via those rival services — and any others that run on India’s instant money transfer system UPI — will now be possible directly within WhatsApp, Meta said in a blog post. New in-app options for credit and debit cards will also be offered.
The additions bolster Meta CEO Mark Zuckerberg’s plan for business messaging to become the “next major pillar” of the company’s sales growth, an agenda that has assumed greater urgency as Meta’s core ads business and metaverse project have come under pressure.
While WhatsApp Pay users will remain capped in India, there is no such limit on the number of users permitted to transact with businesses on WhatsApp using the other methods, a Meta spokesperson said.
With some 300 million people spending about $180 billion via India’s UPI each month, the new transaction options could serve as a powerful lure to attract businesses to pay Meta for access to WhatsApp users.
To date, WhatsApp has limited its end-to-end shopping experiences in India to pilot programs like that with online grocery service JioMart, run by India’s richest person, billionaire Mukesh Ambani, and the metro systems in the cities of Chennai and Bengaluru.
Moving forward, the new payment tools will be available to any company in India that uses WhatsApp’s business platform, which mainly serves large companies, according to the blog post.
Meta is also expanding its Meta Verified subscription program to businesses globally, giving companies a mechanism to validate authenticity and elevate their content in users’ feeds, a separate blog post said.
Monthly subscriptions will be available on Instagram and Facebook in a handful of countries to start and will expand to WhatsApp at a later date, costing $21.99 per Facebook page or Instagram account or $34.99 for both, according to the post.
The nation’s biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the U.S. Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.
Executives attending the meeting included Tesla CEO Elon Musk, Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting “might go down in history as being very important for the future of civilization.”
First, though, lawmakers have to agree on whether to regulate, and how.
Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hands, even though they had diverse views.”
Elon Musk departs following a meeting with U.S. senators about the future of artificial intelligence on Capitol Hill in Washington, D.C., on Sept. 13, 2023.
Among the ideas discussed was whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the U.S. can stay ahead of China and other countries.
“The key point was really that it’s important for us to have a referee,” said Musk during a break in the daylong forum. “It was a very civilized discussion, actually, among some of the smartest people in the world.”
Schumer will not necessarily take the tech executives’ advice as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting in hopes that they would give senators some realistic direction for meaningful regulation.
Congress should do what it can to maximize AI’s benefits and minimize the negatives, Schumer said, “whether that’s enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails.”
Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown mostly unchecked by government in the past several decades. Many lawmakers point to the failure to pass any legislation surrounding social media, such as for stricter privacy standards.
Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and he listed some of the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.
Sparked by the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over its potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
Republican Sen. Mike Rounds of South Dakota, who led the meeting with Schumer, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.
“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.
The tech leaders and others outlined their views at the meeting, with each participant getting three minutes to speak on a topic of their choosing. Schumer and Rounds then led a group discussion.
During the discussion, according to attendees who spoke about it, Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, and Zuckerberg brought up the question of closed vs. “open source” AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.
In terms of a potential new agency for regulation, “that is one of the biggest questions we have to answer and that we will continue to discuss,” Schumer said. Musk said afterward he thinks the creation of a regulatory agency is likely.
Outside the meeting, Google CEO Pichai declined to give details about specifics but generally endorsed the idea of Washington involvement.
“I think it’s important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion,” he said.
Some senators were critical that the public was shut out of the meeting, arguing that the tech executives should testify in public.
Republican Sen. Josh Hawley of Missouri said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Democratic Sen. Richard Blumenthal of Connecticut to require tech companies to seek licenses for high-risk AI systems.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.
While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer’s event risked emphasizing the concerns of big firms over everyone else.
Sarah Myers West, managing director of the nonprofit AI Now Institute, estimated that the combined net worth of the room Wednesday was $550 billion and it was “hard to envision a room like that in any way meaningfully representing the interests of the broader public.” She did not attend.
In the U.S., major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.
Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Schumer said they discussed “the need to do something fairly immediate” before next year’s presidential election.
Hawley and Blumenthal’s broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.
Some of those invited to Capitol Hill, such as Musk, have voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place. But the only academic invited to the forum, Deborah Raji, a University of California, Berkeley researcher who has studied algorithmic bias, said she tried to emphasize real-world harms already occurring.
“There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be,” Raji said.
What remains to be seen, she said, is which voices senators will listen to and what priorities they elevate as they work to pass new laws.
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of European corporations has called on EU leaders to rethink the rules, arguing that it could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.