ReportWire

Tag: iab-internet

  • Justin Trudeau blasts Facebook for blocking news as Canada’s wildfires rage | CNN Business

    CNN —

    Canadian Prime Minister Justin Trudeau blasted Facebook for “putting corporate profits ahead of people’s safety” as the social media platform continues to block news content while wildfires rage in Canada’s Northwest Territories and British Columbia.

    “It is so inconceivable that a company like Facebook is choosing to put corporate profits ahead of ensuring that local news organizations can get up-to-date information to Canadians, and reach them where Canadians spend a lot of their time; online, on social media, on Facebook,” Trudeau said during a news conference Monday.

    Some 60,000 people across the Northwest Territories and British Columbia have been placed under evacuation orders since this weekend, according to the most recent numbers from Canadian officials. Also on Monday, Trudeau described the devastation wrought by the wildfires as “apocalyptic” and praised Canadians for stepping up to support evacuees.

    Earlier this month, Facebook’s parent company Meta began to block news links on Facebook and Instagram in Canada, in response to recently passed legislation in the country that requires tech companies to negotiate payments to news organizations for hosting their content.

    A Meta spokesperson told CNN in a statement on Monday that Canadians “continue to use our technologies in large numbers to connect with their communities and access reputable information, including content from official government agencies, emergency services and non-governmental organizations.”

    The new legislation in Canada “forces us to end access to news content in order to comply with the legislation but we remain focused on making our technologies available,” the statement added, pointing to Meta’s Safety Check tool, which the company said more than 45,000 people had used as of Friday to mark themselves as safe.

    The Meta spokesperson added that 300,000 people have visited the Yellowknife and Kelowna Crisis Response pages on Facebook.

    The Canadian legislation, known as Bill C-18 or the Online News Act, was given final approval in June. It aims to support the sustainability of news organizations by regulating “digital news intermediaries with a view to enhancing fairness in the Canadian digital news marketplace.”

    Meta has previously stated, in a company blog post, that the legislation “misrepresents the value news outlets receive when choosing to use our platforms.” The ongoing controversy in Canada comes amid a global debate between news organizations and social media companies over the value of news content and who should benefit from it.

    During his remarks Monday, Trudeau said Facebook’s move to block news content is “bad for democracy” in the long run. “But right now, in an emergency situation, where up-to-date local information is more important than ever, Facebook’s putting corporate profits ahead of people’s safety,” Trudeau said.

    CNN’s Brian Fung contributed to this report.


  • Elon Musk should be forced to testify on X’s ‘chaotic environment,’ US regulator tells court | CNN Business

    Washington CNN —

    Elon Musk should be forced to testify in an expansive federal probe of X, the company formerly known as Twitter, the US government said.

    The government said mass layoffs and other decisions Musk made raised questions about X’s ability to comply with the law and to protect users’ privacy.

    The US government’s attempt to compel Musk’s testimony is the latest turn in an investigation that predates Musk’s acquisition of X and has intensified because of Musk’s own actions, according to a court filing by the Justice Department on behalf of the Federal Trade Commission.

    The court filing dated Monday cites depositions with multiple former X executives, including its former chief information security officer and former chief privacy officer, who testified that a barrage of layoffs and resignations following Musk’s $44 billion takeover may have hindered X from meeting its security obligations under a 2011 FTC consent agreement.

    X and its outside attorney didn’t immediately respond to a request for comment.

    According to testimony cited in the filing, there were so few employees left after the departures that anywhere from 37% to 50% of the company’s security program lacked effective management and oversight, with no one available to take responsibility for those controls. Other planned upgrades to the company’s security program were “impaired,” the filing said, citing a deposition by the former chief information security officer, Lea Kissner.

    In another example, Musk personally tried to rush the rollout of Twitter Blue, the company’s paid subscription service, the filing said. That forced the company’s security team to bypass the required security and privacy checks that were a part of Twitter’s own policies and that had been mandated in the FTC order, according to the testimony of Damien Kieran, the former chief privacy officer.

    The filing also alleges that Musk’s move to grant several journalists access to internal company records — access that would culminate in the so-called Twitter Files claiming to show evidence of politically motivated censorship — initially involved a plan that could potentially have led to the exposure of private user data in violation of the FTC order.

    According to the filing, Musk’s plan originally called for providing access through a dedicated company laptop with “elevated privileges beyond just what a[n] average employee might have.”

    “Longtime information security employees intervened and implemented safeguards to mitigate the risks,” the filing said, but even then, the former employees testified, the process raised doubts about Musk’s commitment to privacy and security.

    X has moved to block Musk from being forced to testify and has asked a federal court to invalidate the entire FTC order requiring it to safeguard user privacy, accusing the FTC of asking too many questions in its probe.

    But in its filing, the US government said its interest in Musk’s testimony is well-justified based on the appearance of a “chaotic environment” at X driven by “sudden, radical changes at the company” following Musk’s acquisition.

    “The FTC had every reason to seek information about whether these developments signaled a lapse in X Corp.’s compliance” with the 2011 order, the filing said. Confirmed violations of the FTC order could lead to billions of dollars in fines for X, as well as potential legal ramifications for individual executives such as Musk if they are deemed personally responsible for them.

    The FTC investigation traces back to bombshell allegations — raised by Twitter’s former security chief Peiter “Mudge” Zatko and predating Musk’s acquisition — that for years Twitter had failed to live up to its legally binding commitments to the FTC to protect user privacy and security. Those allegations were first reported last year by CNN and The Washington Post.

    The investigation has proven politically charged as Musk — and his allies including Republicans on the House Judiciary Committee — have responded to the probe by publicly accusing the FTC of harassment and overreach.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business

    Washington CNN —

    Coming out of a three-hour Senate forum on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. It was the first of nine sessions intended to develop consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” Schumer said. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Russell Senate Office Building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time the two men have shared a room since they began challenging each other to a cage fight months ago.

    Photo caption: Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer (D-N.Y.) convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s session “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • EU launches probe into disinformation campaigns as X says ‘hundreds’ of Hamas-affiliated accounts removed | CNN Business

    London CNN —

    X says it has removed “hundreds of Hamas-affiliated accounts” and taken down thousands of posts since the attack on Israel by the Palestinian militant group. The announcement came even as the European Commission formally opened an investigation into X, following an earlier warning about disinformation and illegal content on the platform linked to the Israel-Hamas war.

    The platform, formerly known as Twitter, was given 24 hours by the European Union earlier this week to address illegal content and disinformation regarding the conflict or face penalties under the bloc’s recently enacted Digital Services Act.

    CEO Linda Yaccarino responded to EU official Thierry Breton in a letter dated Wednesday that she posted to X. She said the company had “redistributed resources and refocused internal teams who are working around the clock to address this rapidly evolving situation.”

    “There is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time,” Yaccarino wrote.

    “X is… addressing identified fake and manipulated content during this constantly evolving and shifting crisis,” she added. The platform had “assembled a leadership group to assess the situation” shortly after news broke about the attack, Yaccarino said.

    European Union officials are now assessing X’s compliance with the DSA and have asked the company to start responding to investigators by as early as Oct. 18.

    The probe covers X’s “policies and practices regarding notices on illegal content, complaint handling, risk assessment and measures to mitigate the risks identified,” the Commission said in a release.

    “X is required to comply with the full set of provisions introduced by the DSA since late August 2023,” the release added, “including the assessment and mitigation of risks related to the dissemination of illegal content, disinformation, gender-based violence, and any negative effects on the exercise of fundamental rights, rights of the child, public security and mental well-being.”

    X didn’t immediately respond to a request for comment. Beyond X, European officials have sent similar warnings to Meta and TikTok in recent days.

    The announcement did not name the Israel-Hamas war. But this week, EU officials sent a letter to X owner Elon Musk warning that if an investigation finds that the company had failed to meet its legal obligations in connection with content about the war, it could face steep penalties, including billions in fines.

    A slew of mischaracterized videos and other posts went viral on X over the weekend, alarming experts who track the spread of misinformation and offering the latest example of social media platforms’ struggle to deal with a flood of falsehoods during a major geopolitical event.

    Since the attack on Israel, Yaccarino said X had acted to “remove or label tens of thousands of pieces of content” that break its rules on violent speech, manipulated media and graphic media. It had also responded to more than 80 “take down requests” from EU authorities to remove content.

    “Community Notes” — which allow X users to fact check false posts — are visible on “thousands of posts, generating millions of impressions,” she wrote.

    According to Yaccarino, notes related to the conflict take about five hours on average to show up after a post is created, a revelation that could fuel concerns that fake or manipulated content is being seen by thousands — or in some cases millions — of people before being moderated.

    The DSA is one of the most ambitious efforts by policymakers anywhere to regulate tech giants, and companies face billions of dollars in fines for violating the act.


  • Meta’s Threads is finally available on desktop | CNN Business

    New York CNN —

    Threads users, rejoice: the app is rolling out its highly anticipated web version Tuesday.

    The update — perhaps the most requested by users since Threads’ mobile-only launch last month — puts the new platform one step closer to recreating the functions offered by rival X, the platform formerly known as Twitter, and could help reignite user growth following a sluggish period.

    Parent company Meta says Threads users will soon be able to log in, post, view and interact with other posts via a browser on a desktop computer, as the web version rolls out to users in the coming days. The company says it plans to add more desktop features in the future. In an early access test of some of the web-based features, CNN was able to post on the platform but could not yet scroll the home feed.

    Threads launched in early July with stunning success, garnering more than 100 million sign-ups in its first week on the back of months of chaos at Twitter. But the buzz faded somewhat as users realized the bare-bones platform still lacked many of the features that made Twitter popular, such as trending topics, robust search functions and direct messaging. Threads has been steadily rolling out smaller updates but the hotly demanded web version could help reignite stronger user engagement.

    The new web version could also raise fresh competitive concerns for X, after owner Elon Musk sparked user backlash last week by suggesting he might do away with the platform’s block feature.

    Meta employees have for weeks teased that a desktop version of Threads was in the works and being tested internally. Just last week, Instagram head Adam Mosseri, who is also leading Threads, said he had been posting from the platform’s desktop version and suggested “it’ll be ready soon but it needs more work.”

    Web access is just one of a series of recent updates to Threads as Meta continues to build out the new platform. Other features added over the past month include new “reposts” and “likes” tabs that show users the posts they have reshared and liked in their profiles, a chronological following feed and a button to share Threads posts to Instagram DMs.

    Continued updates to Threads are essential if Meta wants to maintain the early traction it had with users. Despite the app’s stunning success following its launch, by the end of July, Threads’ daily active user count had fallen 82% to around 8 million users, according to a report from market research firm Sensor Tower earlier this month. By August 16, updates to Threads had helped the app notch slight gains to 11 million daily active users, Sensor Tower said in a report Monday.

    Meta CEO Mark Zuckerberg has said he is “quite optimistic” about the app’s potential.

    “We saw unprecedented growth out of the gate and more importantly we’re seeing more people coming back daily than I’d expected,” he said last month during the company’s earnings call. “And now, we’re focused on retention and improving the basics. And then after that, we’ll focus on growing the community to the scale we think is possible.”


  • How to block graphic social media posts on your kids’ phones | CNN Business

    New York CNN —

    Many schools, psychologists and safety groups are urging parents to disable their children’s social media apps over mounting concerns that Hamas plans to disseminate graphic videos of hostages captured in the Israel-Gaza war.

    Disabling an app or implementing restrictions, such as filtering out certain words and phrases, on young users’ phones may sound like a daunting process. But platforms and mobile operating systems offer safeguards that could go a long way in protecting a child’s mental health.

    Following the attacks on Israel last weekend, much of the terror has played out on social media. Videos of hostages taken on the streets and civilians left wounded continue to circulate on various platforms. Although some companies have pledged to restrict sensitive videos, many are still being shared online.

    That can be particularly stressful for minors. The American Psychological Association recently issued a warning about the psychological impacts of the ongoing violence in Israel and Gaza, and other research has linked exposure to violence on social media and in the news to a “cycle of harm to mental health.”

    Alexandra Hamlet, a clinical psychologist in New York City, told CNN people who are caught off guard by seeing certain upsetting content are more likely to feel worse than individuals who choose to engage with content that could be upsetting to them. That’s particularly true for children, she said.

    “They are less likely to have the emotional control to turn off content that they find triggering than the average adult, their insight and emotional intelligence capacity to make sense of what they are seeing is not fully formed, and their communication skills to express what they have seen and how to make sense of it is limited comparative to adults,” Hamlet said.

    If deleting an app isn’t an option, here are other ways to restrict or closely monitor a child’s social media use:

    Parents can start by visiting the parental control features found in their child’s phone’s mobile operating system. iOS’ Screen Time tool and Android’s Google Family Link app help parents manage a child’s phone activity and can restrict access to certain apps. From there, various controls can be selected, such as restricting app access or flagging inappropriate content.

    Guardians can also set up guardrails directly within social media apps.

    TikTok: TikTok, for example, offers a Family Pairing feature that allows parents and guardians to link their own TikTok account to their child’s account and restrict their ability to search for content, limit content that may not be appropriate for them or filter out videos with words or hashtags from showing up in feeds. These features can also be enabled within the settings of the app, without needing to sync up a guardian’s account.

    Facebook, Instagram and Threads: Meta, which owns Facebook, Instagram and Threads, has an educational hub for parents with resources, tips and articles from experts on user safety, and a tool that allows guardians to see how much time their kids spend on Instagram and set time limits, which some experts advise should be considered during this time.

    YouTube: On YouTube, the Family Link tool allows parents to set up supervised accounts for their children, set screen time limits, or block certain content. At the same time, YouTube Kids provides a safer space for kids, and parents who decide their kids are ready to see more content on YouTube can create a supervised account. In addition, autoplay is turned off by default for anyone under 18 but can be turned off at any time in Settings by all users.

    Hamlet said families should consider creating a family policy where family members agree to delete their apps for a certain period of time.

    “It could be helpful to frame the idea as an experiment, where everyone is encouraged to share how not having the apps has made them feel over the course of time,” she said. “It is possible that after a few days of taking a break from social media, users may report feeling less anxious and overwhelmed, which could result in a family vote of continuing to keep the apps deleted for a few more days before checking in again.”

    If there’s resistance, Hamlet said parents should try to reduce the time spent on apps right now and come up with an agreed-upon number of minutes of usage each day.

    “Parents could ideally include a contingency where in exchange for allowing the child to use their apps for a certain number of minutes, their child must agree to having a short check in to discuss whether there was any harmful content that the child had exposure to that day,” she said. “This exchange allows both parents to have a protected space to provide effective communication and support, and to model openness and care for their child.”

    TikTok: A TikTok spokesperson, who said the platform uses technology and 40,000 safety professionals to moderate content, told CNN it is taking the situation seriously and has increased dedicated resources to help prevent violent, hateful or misleading content on the platform.

    Meta: Meta similarly said it has set up a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to monitor and respond to the situation. “Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation,” Meta said in a statement. “We’ll continue this work as this conflict unfolds.”

    YouTube: Google-owned YouTube said it has age-restricted thousands of videos that do not violate its policies but are not appropriate for viewers under 18; this may include bystander footage. The company told CNN it has “removed thousands of harmful videos” and its teams “remain vigilant to take action quickly across YouTube, including videos, Shorts and livestreams.”


  • What is catfishing and what can you do if you are catfished? | CNN Business

    Editor’s Note: This story is part of ‘Systems Error’, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.



    CNN —

    Catfishing is when a person uses false information and images to create a fake identity online with the intention to trick, harass or scam another person. It often happens on social media or on dating apps and websites, where it is a common tactic for forming online relationships under false pretenses, sometimes to lure people into financial scams.

    The person doing the pretending, or the “catfish,” may also obtain intimate images from a victim and use them to extort or blackmail the person, a practice known as sextortion. They may also use other personal information shared with them to commit identity theft.

    The term is believed to originate from the 2010 documentary “Catfish,” in which a young Nev Schulman starts an online relationship with teenager “Megan”, who turns out to be an older woman.

    In the final scene of the documentary, the woman’s husband shares an anecdote about how live cod used to be exported from Alaska alongside catfish, which kept the cod active and alert. He likened this to people in real life who keep others on their toes, like his wife. Schulman went on to produce the docuseries “Catfish.”

    There are many reasons people resort to catfishing, but the most common reason is a lack of confidence, according to the Cybersmile Foundation, a nonprofit focused on digital well-being. The foundation states that if someone is not happy with themselves, they may feel happier when pretending to be someone more attractive to others.

    They may also hide their identity to troll someone; to engage in a relationship other than their existing one; or to extort or harass people. Some people may catfish to explore sexual preferences.

    Studies have shown that catfish are more likely to be educated men, with one 2022 study finding perpetrators are more likely to come from religious backgrounds, possibly providing a way to form relationships without the constraints they face in real life, the authors write.

    In another study published last year, Evita March, senior lecturer in psychology at Federation University in Australia, found that people with the strong personality traits of sadism, psychopathy, and narcissism were more likely to catfish.

    March told CNN the findings are preliminary and that her team would like to further investigate if certain personality traits lead to specific kinds of catfishing behavior.

    In the US, romance scams resulting from catfishing have among the highest reported financial losses of any internet crime. A total of 19,050 Americans reported losing almost $740 million to romance scammers in 2022.

    In the UK, the country’s National Fraud Intelligence Bureau received more than 8,000 reports of romance fraud in the 2022 financial year, totaling more than £92 million (US $116.6 million) lost, with an average loss of £11,500 (US $14,574) per victim.

    In Singapore, romance scams are among the top 10 reported scams. The amount of money victims reported losing to catfish increased by more than 30%, from SGD$33.1 million (US $24 million) in 2020 to SGD$46.6 million (US $34 million) the following year.

    Catfishing is also increasingly happening on an industrial scale with the rise of “cyber scam centers” that have links to human trafficking in Southeast Asia, according to INTERPOL.

    Victims of trafficking are forced to become fraudsters by creating fake social media accounts and dating profiles to scam and extort millions of dollars from people around the world using different schemes such as fake crypto investment sites.

    Catfishing used to occur more among adults through online dating sites, but has now become equally common among teenagers, according to the Cybersmile Foundation.

    Research by Snapchat last year with more than 6,000 Gen Z teenagers and young people in Australia, France, Germany, India, the UK and the US found that almost two-thirds of them or their friends had been targeted by catfish or hackers to obtain private images that were later used to extort them.

    Older people are also likely to lose more money to catfishing. In 2021, Americans lost half a billion dollars through romance scams perpetrated by people using fake personas or impersonating others, with the largest losses paid in cryptocurrency, according to the US Federal Trade Commission. The number of reports rose tenfold among young people (ages 18 to 29), but older people (over 70) generally reported losing more money.

    In Australia, a third of dating and romance scams result in financial losses, with women having lost more than double the total amount lost by men, and older people again losing more money than those under 45, according to data from the country’s National Anti-Scam Centre.

    “Romance scams are one of the hardest things to avoid. It’s emotional manipulation,” said Ngo Minh Hieu, a Vietnamese former hacker and founder of Chong Lua Dao (scam fighters), a cybersecurity nonprofit.

    Since 2020, Hieu has been monitoring trends to help scam victims, he says, and explains that in his experience, a catfish will usually approach a victim with the premeditated intention to scam them.

    They are likely to use personal information mined from the victim’s social media accounts, or they may have bought that data from users in private chat groups simply by providing a potential victim’s phone number.

    There are many signs you can look for to help spot a catfish, experts say.

    Firstly, a catfish might contact you out of nowhere, start regular conversations with you and shower you with compliments to quickly build up trust and rapport. They may state desirable qualities in their opening conversations, including wealth or attractiveness, but then rarely or never call you, either over the phone or on a video call.

    They often do not have many friends on social media and their posts are usually scarce. Search results using their name may not yield many results and their stories are usually inconsistent. For example, personal details like where they live or go to school might change when discussed again.

    Another classic sign is if the feelings they declare for you escalate quickly and after a short period of time. A catfish may ask you for sensitive images and money.

    Many scammers use already available photos of other people in their fake personas, which may be possible to spot using a reverse image search.

    With the explosion of AI technology, scammers may now generate unique and realistic images for use as profile pictures. But Hieu explains that, because of the patterns built into them by design, AI-generated images can be detected using tools such as AI-Generated Image Detector.

    If you believe you are being catfished, there are steps you can take to protect yourself and help end the targeting.

    Experts advise that you should not be afraid to ask direct questions or challenge the person you believe may be catfishing you. You can do this by asking them why they are not willing to call you or meet face to face, or questioning how they can declare their love for you so quickly.

    In a 2020 study, Fangzhou Wang, a cybercrime professor at the University of Texas at Arlington, and her colleagues sent nearly 200 deterrent messages to active scammers and concluded that this could make fraudsters respond less or, in some cases, admit to wrongdoing.

    An example of one of the messages was: “I know you are scamming innocent people. My friend was recently arrested for the same offense and is facing five years in prison. You should stop before you face the same fate.”

    You should think about stopping all communications with the catfish, and refrain from sending money to them at the risk of further financial demands. Experts say catfish continue to target those who engage with them more.

    It’s also useful to secure your online accounts and ensure your personal information is kept private online.

    Cybersecurity expert Hieu explained that you can do this by putting personal information such as your phone number, email addresses and date of birth in private mode on social media. You can also check if your email has been compromised in a data breach by using tools such as the Have I Been Pwned website.
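    For readers comfortable with a bit of scripting, that kind of breach check can also be automated. Below is a minimal, illustrative Python sketch that queries Have I Been Pwned’s v3 “breachedaccount” endpoint; note that the v3 API requires a paid API key sent in the “hibp-api-key” header, and the application name used in the “user-agent” header here is a made-up placeholder.

        # Illustrative sketch only: query the Have I Been Pwned v3 API for
        # breaches involving an email address. The v3 API requires a paid
        # API key (sent in the hibp-api-key header) and a user-agent header.
        import requests

        HIBP_API_KEY = "your-api-key-here"  # placeholder: obtain a key from haveibeenpwned.com

        def breaches_for(email: str) -> list[str]:
            resp = requests.get(
                f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
                headers={
                    "hibp-api-key": HIBP_API_KEY,
                    "user-agent": "personal-breach-check",  # hypothetical app name
                },
                params={"truncateResponse": "true"},  # return breach names only
                timeout=10,
            )
            if resp.status_code == 404:  # 404 means the account appears in no known breach
                return []
            resp.raise_for_status()
            return [breach["Name"] for breach in resp.json()]

        print(breaches_for("test@example.com") or "No breaches found")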

    Enabling two-factor authentication on your accounts can also help protect against unauthorized access. It requires you to take a second step to verify your identity when logging in to a service, for example via SMS or a physical device, such as a key fob.
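    To show the mechanics behind that second step, here is a minimal sketch of time-based one-time passwords (TOTP), the scheme most authenticator apps use, written with the open-source pyotp library. It illustrates the protocol in general, not any particular platform’s implementation.

        # Illustrative sketch of TOTP two-factor authentication using pyotp
        # (pip install pyotp). A shared secret is enrolled once (often via a
        # QR code); the authenticator app then derives a fresh 6-digit code
        # every 30 seconds, which the service verifies at login.
        import pyotp

        secret = pyotp.random_base32()  # created at enrollment, stored by both sides
        totp = pyotp.TOTP(secret)

        code = totp.now()                               # what the authenticator app displays
        print("Current code:", code)
        print("Accepted at login:", totp.verify(code))  # what the service checks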

    Being subjected to catfishing can also have a significant impact on your mental health, with many victims left unable to trust others and some left feeling embarrassed about falling for the scam. A 2019 study found that young LGBTQ+ men in rural America experiencing catfishing on dating apps felt angry and fearful.

    If someone was “sextorted,” they may continue to fear their images resurfacing online in the future.

    March from Federation University in Australia recommended improving digital literacy and staying aware of the potential red flags. She also emphasized the need to recognize today’s loneliness epidemic, which “leads people to perhaps be more susceptible to catfishing scams,” she said.

    Seeking professional support from a counselor or talking to supportive friends and family is one way to address loneliness, March added.

    Catfishing is not explicitly a crime, but the actions that often accompany catfishing, such as extortion for money, gifts or sexual images are crimes in many places.

    The main challenge in tackling online fraud is the issue of jurisdiction, according to a 2020 paper about police handling of online fraud victims in Australia. Traditional policing operates within specific territories, but the internet has blurred these boundaries, the authors write.

    Cybercriminals from one country can also target victims in other countries, complicating law enforcement efforts, and victims often face difficulty and frustration when trying to report cybercrimes, which can further traumatize them.

    Wang told CNN that virtual private networks (VPNs), forged credentials and anonymous communication methods make it extremely difficult to determine identities or locations.

    Scammers have also capitalized on the proliferation of AI, such as AI-generated personas, which complicates the ability of law enforcement authorities to gather evidence and build cases against a catfish.

    “Law enforcement agencies, often constrained by limited resources and prioritizing cases based on severity and direct impact, might not readily prioritize catfishing cases without substantial financial losses or physical harm,” Wang told CNN.

    In the US, there are some legal precedents. In 2022, a woman who had created multiple fake profiles to target wealthy men was charged with extortion, cyberstalking, and interstate threats and was sentenced in a plea deal last year.

    In the UK, while catfishing itself is not classified as a criminal offense, if the person using a fake profile engages in illegal activities, like financial gain or harassment, they can be punished by law.

    China has a law that implicates people who allow their websites or communications platforms to be used for frauds and other illegal activities under Article 46 in the Cybersecurity Law.

    If a catfish has tricked you into sending them money, you can go to the authorities and your bank immediately, depending on where you are.

    If activities that are crimes in your country have taken place because of being catfished, such as extortion, identity theft or harassment, the police or other authorities, such as specific commissions targeting online crime, may be your first port of call.

    The Australian government’s agency responsible for online safety, the eSafety Commissioner, advises that people gather all the evidence they can, including screenshots of the scammer and of chats with them.

    Depending on the case, you can also submit an abuse or impersonation report against the catfish directly to the platform on which you are communicating with them.

    If you believe the person you are talking to is not who they say they are, most of the larger social media platforms, including Facebook, Instagram, TikTok, X, Telegram, Tinder and WhatsApp, give you the option to report them for impersonation or other forms of abuse. WeChat also offers a channel to report another user for harassment, fraud or illegal activity, while Telegram hosts an anti-scam thread where users can report fraudsters.

    You are not responsible for the catfishing behavior of others, but staying vigilant and alert online goes a long way.

    Make sure your online accounts are secured and use two-factor authentication. When browsing the internet, you may want to use a virtual private network (VPN) which makes your internet activity harder to track.

    In many countries, such as the US, the UK and Australia, victims have reported being preyed on by catfish who tricked them into putting money into bogus cryptocurrency investment sites.

    If someone you have been talking to asks you to put money into an investment site, think twice. The Global Anti-Scam Organization maintains a database of fraudulent websites, built from its own investigations and the public’s tip-offs, to help inform you if you’re being scammed.

    If you are a parent, this guide provided by the UK-based National College platform suggests communicating effectively and sensitively with your children about the risks. You may also help them report and block the catfish accounts, and report to the police if they have been subjected to anything illegal or inappropriate.

    Because catfish get close to a target often by relying on personal information posted on social media, UNICEF asks children to consider their rights when it comes to parents sharing their pictures and other content online, especially when they are underage.






  • Portable hotspots arrive in Maui to bring internet to residents and tourists | CNN Business

    New York CNN —

    Portable mobile hotspots have arrived in Maui to help bring internet service to the thousands of people who may have been unable to call for help since the wildfires started to rage out of control on the island.

    Verizon told CNN on Thursday its teams are currently deploying the first batch of satellite-based mobile hotspots at evacuation sites in areas of greatest need, particularly the west side of the island, west of Maalaea, Lahaina and Northern Kapalua.

    Verizon’s larger equipment, which is being barged over from Honolulu, is expected to arrive later in the day. This includes COLTs (Cells on Light Trucks) — a mobile site on wheels that connects to a carrier’s service via a satellite link — and a specialized satellite trailer used to provide service to a cell site that has a damaged fiber connection.

    “Our team is closely monitoring the situation on the ground and our network performance,” a Verizon spokesperson told CNN. “Verizon engineers on the island are working to restore service in impacted areas as quickly and safely as possible.”

    The company said it is working closely with the Hawaii Emergency Management Agency and the Maui County Emergency Operations Center to prioritize its network recovery.

    Other carriers continue to mobilize their efforts, too. An AT&T spokesperson said it is working with local public safety officials to deploy SatCOLTs (Satellite Cells on Light Trucks), drones with cell support and other solutions across the island, as equipment comes in from neighboring islands.

    Meanwhile, a T-Mobile spokesperson said its cell sites are “holding up well during the fires” but commercial power outages may be disrupting the service for some customers. “As soon as conditions allow, our priority is to deploy teams with portable generators that will bring temporary power back to our sites,” the spokesperson said.

    The Maui disaster has already wiped out power to at least 14,000 homes and businesses in the area, according to PowerOutage.us. Many cell towers have backup power generators but they have limited capacity to keep towers running.

    “911 is down. Cell service is down. Phone service is down,” Hawaii Lt. Gov. Sylvia Luke told CNN on Wednesday morning.

    Verizon, T-Mobile and AT&T said they are waiving call, text and data overage charges for Maui residents during this time.

    Although strong winds can sometimes threaten cell towers, most are strong enough to handle the worst that even a Category 5 hurricane can bring. Fire, however, complicates the issue.

    “When the fires get too close to cell sites, they will obviously burn equipment, antennas, and feedlines,” said Glenn O’Donnell, VP of research at market research firm Forrester. “In extreme cases, they will also weaken the towers, leading some to collapse. The smoke and flames can also attenuate [reduce the strength of] signals because of the particulate density in the air.”

    If a tower collapses, cell networks could take months to be restored. But if carriers are able and prepared to do restorations with mobile backup units, it could bring limited service back within hours, O’Donnell said. Wireless carriers often bring in COWs (Cells On Wheels), COLTs and GOaTs (Generator on a Trailer) in emergencies to provide backup service when cell towers go down.

    Cell towers have backup technology built in, but this is typically done through optical fiber cables or microwave (wireless) links, according to Dimitris Mavrakis, senior researcher at ABI Research. However, if something extraordinary happens, such as interaction with rampant fires, these links may experience “catastrophic failures and leave cells without a connection to the rest of the world.”

    And, in an emergency, a spike in call volume can overload the system — if people are able to get reception.

    “Even cells that have a good service may experience outages due to the sheer volume of communication happening at once,” Mavrakis said. “Everyone in these areas may be trying to contact relatives or the authorities at once, saturating the network and causing an outage. This is easier to correct, though, and network operators may put in place additional measures to render them operational quickly.”

    Although it’s unclear how long cell phone service could be down in affected regions, companies have been able to bring connectivity to disaster regions in the past. In 2017, Google worked with AT&T and T-Mobile to deploy its Project Loon balloons to deliver internet service to Puerto Rico in the aftermath of Hurricane Maria.

    Project Loon has since shut down.


  • Large US tech companies face new EU rules | CNN Business

    CNN —

    The world’s largest tech companies must comply with a sweeping new European law starting Friday that affects everything from social media moderation to targeted advertising and counterfeit goods in e-commerce — with possible ripple effects for the rest of the world.

    The unprecedented EU measures for online platforms will apply to companies including Amazon, Apple, Google, Meta, Microsoft, Snapchat and TikTok, among many others, reflecting one of the most comprehensive and ambitious efforts by policymakers anywhere to regulate tech giants through legislation. It could lead to fines for some companies and to changes in software affecting consumers.

    The rules seek to address some of the most serious concerns that critics of large tech platforms have raised in recent years, including the spread of misinformation and disinformation; possible harms to mental health, particularly for young people; rabbit holes of algorithmically recommended content and a lack of transparency; and the spread of illegal or fake products on virtual marketplaces.

    Although the European Union’s Digital Services Act (DSA) passed last year, companies have had until now to prepare for its enforcement. Friday marks the arrival of a key compliance deadline — after which tech platforms with more than 45 million EU users will have to meet the obligations laid out in the law.

    The EU also says the law intends “to establish a level playing field to foster innovation, growth and competitiveness both in the European Single Market and globally.” The action reinforces Europe’s position as a leader in checking the power of large US tech companies.

    For all platforms, not just the largest ones, the DSA bans data-driven targeted advertising aimed at children, as well as targeted ads to all internet users based on protected characteristics such as political affiliation, sexual orientation and ethnicity. The restrictions apply to all kinds of online ads, including commercial advertising, political advertising and issue advertising. (Some platforms had already in recent years rolled out restrictions on targeted advertising based on protected characteristics.)

    The law bans so-called “dark patterns,” or the use of subtle design cues that may be intended to nudge consumers toward giving up their personal data or making other decisions that a company might prefer. An example of a dark pattern commonly cited by consumer groups is when a company tries to persuade a user to opt into tracking by highlighting an acceptance button with bright colors, while simultaneously downplaying the option to opt out by minimizing that choice’s font size or placement.

    The law also requires all online platforms to offer ways for users to report illegal content and products and for them to appeal content moderation decisions. And it requires companies to spell out their terms of service in an accessible manner.

    For the largest platforms, the law goes further. Companies designated as Very Large Online Platforms or Very Large Online Search Engines will be required to undertake independent risk assessments focused on, for example, how bad actors might try to manipulate their platforms, or use them to interfere with elections or to violate human rights — and companies must act to mitigate those risks. And they will have to set up repositories of the ads they’ve run and allow the public to inspect them.

    Just a handful of companies are considered very large platforms under the law. But the list finalized in April includes the most powerful tech companies in the world, and, for those firms, violations can be expensive. The DSA permits EU officials to issue fines worth up to 6% of a very large platform’s global annual revenue. That could mean billions in fines for a company as large as Meta, which last year reported more than $116 billion in revenue.
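
    To put that cap in concrete terms, here is a minimal back-of-the-envelope calculation, a sketch using only the figures cited above — it is an illustration of the 6% ceiling, not a statement of how EU regulators actually compute fines:

    ```python
    # Rough ceiling on a DSA fine, per the 6% cap and Meta's reported revenue.
    DSA_FINE_CAP = 0.06                  # up to 6% of global annual revenue
    meta_revenue_usd = 116_000_000_000   # "more than $116 billion" last year

    max_fine = DSA_FINE_CAP * meta_revenue_usd
    print(f"Maximum DSA fine for Meta: ${max_fine / 1e9:.1f} billion")
    # -> Maximum DSA fine for Meta: $7.0 billion
    ```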

    Companies have spent months preparing for the deadline. As recently as this month, TikTok rolled out a tool for reporting illegal content and said it would give EU users specific explanations when their content is removed. It also said it would stop showing ads to teens in Europe based on the data the company has collected on them, all to comply with the DSA rules.

    “We’ve been supportive of the objectives of the DSA and the creation of a regulatory regime in Europe that minimizes harm,” said Nick Clegg, Meta’s president of global affairs and a former deputy prime minister of the UK, in a statement Tuesday. He said Meta assembled a 1,000-person team to prepare for DSA requirements. He outlined several efforts by the company including limits on what data advertisers can see on teens ages 13 to 17 who use Facebook and Instagram. He said advertisers can no longer target the teens based on their activity on those platforms. “Age and location is now the only information about teens that advertisers can use to show them ads,” he said.

    In a statement, a Microsoft spokesperson told CNN the DSA deadline “is an important milestone in the fight against illegal content online. We are mindful of our heightened responsibilities in the EU as a major technology company and continue to work with the European Commission on meeting the requirements of the DSA.”

    Snapchat parent Snap told CNN that it is working closely with the European Commission to ensure the company is compliant with the new law. Snap has appointed several dedicated compliance employees to monitor whether it is living up to its obligations, the company said, and has already implemented several safeguards.

    And Apple said in a statement that the DSA’s goals “align with Apple’s goals to protect consumers from illegal and harmful content. We are working to implement the requirements of the DSA with user privacy and security as our continued North Star.”

    Google and Pinterest told CNN they have also been working closely with the European Commission.

    “We share the DSA’s goals of making the internet even more safe, transparent and accountable, while making sure that European users, creators and businesses continue to enjoy the benefits of the web,” a Google spokesperson said.

    A Pinterest spokesperson said the company would “continue to engage with the European Commission on the implementation of the DSA to ensure a smooth transition into the new legal framework.” The spokesperson added: “The wellbeing, safety and privacy of our users is a priority and we will continue to build on our efforts.”

    Many companies should be able to comply with the law, given their existing policies, teams and monitoring tools, according to Robert Grosvenor, a London-based managing director at the consulting firm Alvarez & Marsal. “Europe’s largest online service providers are not starting from ground zero,” Grosvenor said. But, he added: “Whether they are ready to become a highly regulated sector is another matter.”

    EU officials have signaled they will be scrutinizing companies for violations. Earlier this summer, European officials performed preemptive “stress tests” of X, the company formerly known as Twitter, as well as Meta and TikTok to determine the companies’ readiness for the DSA.

    For much of the year, EU Commissioner Thierry Breton has been publicly reminding X of its coming obligations as the company has backslid on some of its content moderation practices. Even as Breton concluded that X was taking its stress test seriously in June, the company had just lost a top content moderation official and had withdrawn from a voluntary EU commitment on disinformation that European officials had said would be part of any evaluation of a platform’s compliance with the DSA.

    X told CNN ahead of Friday’s deadline that it was on track to comply with the new law.

    Analysts anticipate that the EU will be watching even more closely after the deadline — and some hope that the rules will either encourage tech platforms to voluntarily extend their EU practices to the rest of the world or else drive policymakers elsewhere to adopt similar measures.

    “We hope that these new laws will inspire other jurisdictions to act because these are, after all, global companies which apply many of the same practices worldwide,” said Agustin Reyna, head of legal and economic affairs at BEUC, a European consumer advocacy group. “Europe got the ball rolling, but we need other jurisdictions to win the match against tech giants.”

    Already, Amazon has sought to challenge the very large platform label in court, arguing that the DSA’s requirements are geared toward ad-based online speech platforms, that Amazon is a retail platform, and that none of its direct rivals in Europe, some of them larger than Amazon within individual EU countries, has likewise been labeled.

    The legal fights could present the first major test of the DSA’s durability in the face of Big Tech’s enormous resources. Amazon told CNN that it plans to comply with the EU General Court’s decision, whatever the outcome.

    “Amazon shares the goal of the European Commission to create a safe, predictable and trusted online environment, and we invest significantly in protecting our store from bad actors, illegal content, and in creating a trustworthy shopping experience,” an Amazon spokesperson said. “We have built on this strong foundation for DSA compliance.”

    TikTok did not immediately respond to a request for comment on this story.

    Source link

  • Chinese artists boycott big social media platform over AI-generated images | CNN Business

    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Artists across China are boycotting one of the country’s biggest social media platforms over complaints about its AI image generation tool.

    The controversy began in August when an illustrator who goes by the name Snow Fish accused the privately owned social media site Xiaohongshu of using her work to train its AI tool, Trik AI, without her knowledge or permission.

    Trik AI specializes in generating digital art in the style of traditional Chinese paintings; it is still undergoing testing and has not yet been formally launched.

    Snow Fish, whom CNN is identifying by her Xiaohongshu username for privacy reasons, said she first became aware of the issue when friends sent her posts of artwork from the platform that looked strikingly similar to her own style: sweeping brush-like strokes, bright pops of red and orange, and depictions of natural scenery.

    “Can you explain to me, Trik AI, why your AI-generated images are so similar to my original works?” Snow Fish wrote in a post which quickly circulated online among her followers and the artist community.

    The controversy erupted just weeks after China unveiled rules for generative AI, becoming one of the first governments to regulate the technology as countries around the world wrestle with AI’s potential impact on jobs, national security and intellectual property.

    Screenshots of AI-generated artworks on Xiaohongshu, taken by the artist Snow Fish.

    Trik AI and Xiaohongshu, which says it has 260 million monthly active users, do not publicize what materials are used to train the program and have not publicly commented on the allegations.

    The companies have not responded to multiple requests from CNN for comment.

    But Snow Fish said a person using the official Trik AI account had apologized to her in a private message, acknowledging that her art had been used to train the program and agreeing to remove the posts in question. CNN has reviewed the messages.

    However, Snow Fish wants a public apology. The controversy has fueled online protests on the Chinese internet against the creation and use of AI-generated images, with several other artists claiming their works had been similarly used without their knowledge.

    Hundreds of artists have posted banners on Xiaohongshu saying “No to AI-generated images,” while a related hashtag has been viewed more than 35 million times on the Chinese Twitter-like platform Weibo.

    The boycott in China comes as debates about the use of AI in arts and entertainment are playing out globally, including in the United States, where striking writers and actors have ground most film and television production to a halt in recent months over a range of issues — including studios’ use of AI.

    Many of the artists boycotting Xiaohongshu have called for better rules to protect their work online — echoing similar complaints from artists around the world worried about their livelihoods.

    These concerns have grown as the race to develop AI heats up, with new tools developed and released almost faster than governments can regulate them — ranging from chatbots such as OpenAI’s ChatGPT to Google’s Bard.

    China’s tech giants, too, are rapidly developing their own generative artificial intelligence, from Baidu’s ERNIE Bot launched in March to SenseTime’s chatbot SenseChat.

    Besides Trik AI, Xiaohongshu has also developed a new function called “Ci Ke” which allows users to post content using AI-generated images.

    For artists like Snow Fish, the technology behind AI isn’t the problem, she said; it’s the way these tools use their work without permission or credit.

    Many AI models are trained from the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    Snow Fish added that these complaints had been slowly growing within the artist community but had mostly been privately shared rather than openly protested.

    “It’s an outbreak this time,” she said. “If it easily goes away without any splash, people will remain silent, and those AI developers will keep harming our rights.”

    Another Chinese illustrator, Zhang, whom CNN is identifying by his last name for privacy reasons, joined the boycott in solidarity. “They’re shameless,” said Zhang. “They didn’t put in any effort themselves, they just took parts from other artists’ work and claimed it as their own, is that appropriate?”

    “In the future, AI images will only be cheaper in people’s eyes, like plastic bags. They will become widespread like plastic pollution,” he said, adding that tech leaders and AI developers care more about their own profits than about artists’ rights.

    Tianxiang He, an associate professor of law at City University of Hong Kong, said the use of AI-generated images also raises larger questions among the artistic community about what counts as “real” art, and how to preserve its “spiritual value.”

    Similar boycotts have been seen elsewhere around the world, against popular AI image generation tools such as Stable Diffusion, released last year by London-based Stability AI, and California-based Midjourney.

    Stability AI, the maker of Stable Diffusion, is embroiled in an ongoing lawsuit brought by stock image giant Getty Images alleging copyright infringement.

    Despite the speed at which AI image generation tools are being developed, there is “no global consensus about how to regulate this kind of training behavior,” said He.

    He added that many such tools are developed by tech giants who own huge databases, which allows them to “do a lot of things … and they don’t care whether it’s protected by the law or not.”

    Because Trik AI has a smaller database to pull from, the similarities between its AI-generated content and artists’ original works are more obvious, making an easier legal case, he said.

    Cases of copyright infringement would be harder to detect if more works were put in a larger database, he added.

    Governments around the world are now grappling with how to set global standards for the wide-ranging technology. The European Union in June became one of the first in the world to set rules on how companies can use AI, while the United States is still holding discussions with Capitol Hill lawmakers and tech companies to develop legislation.

    China was also an early adopter of AI regulation, publishing new rules that took effect in August. But the final version relaxed some of the language that had been included in earlier drafts.

    Experts say major powers like China, when drafting regulations, likely prioritize centralizing power away from tech giants and pulling ahead in the global tech race over protecting individuals’ rights.

    He, the Hong Kong law professor, called the regulations a “very broad general regulatory framework” that provide “no specific control mechanisms” to regulate data mining.

    “China is very hesitant to enact anything related to say yes or no to data mining, because that will be very dangerous,” he said, adding that such a law could strike a blow to the emerging market, amid an already slow national economy.

    Source link

  • NY officials announce legislation aimed at protecting kids on social media | CNN Business

    CNN
     — 

    Two new bills meant to protect children’s mental health online by changing the way they are served content on social media and by limiting companies’ use of their data will be introduced in the New York state legislature, state and city leaders said Wednesday.

    New York Gov. Kathy Hochul and New York Attorney General Letitia James made the announcement at the Manhattan headquarters of the United Federation of Teachers, joined by UFT President Michael Mulgrew, State Senator Andrew Gounardes, Assemblywoman Nily Rozic and community advocates.

    “Our children are in crisis, and it is up to us to save them,” Hochul said, comparing social media algorithms to cigarettes and alcohol. “The data around the negative effects of social media on these young minds is irrefutable, and knowing how dangerous the algorithms are, I will not accept that we are powerless to do anything about it.”

    The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” would limit what New York officials say are the harmful and addictive features of social media for children. The act would allow users under 18 and their parents to opt out of receiving feeds driven by algorithms designed to harness users’ personal data to keep them on the platforms for as long as possible. Those who opt out would receive chronological feeds instead, like in the early days of social media.

    The bill would also allow users and parents who opt in to receiving algorithmically generated content feeds to block access to social media platforms between 12 a.m. and 6 a.m. or to limit the total number of hours per day a minor can spend on a platform.
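
    For readers who think in code, the feed and time-limit mechanics the bill describes might look roughly like the sketch below. This is a hypothetical illustration only; the bill imposes obligations on platforms rather than prescribing an implementation, and all names here are invented:

    ```python
    from dataclasses import dataclass
    from datetime import datetime, time

    @dataclass
    class Post:
        created_at: datetime
        engagement_score: float  # platform-estimated likelihood of interaction

    def build_feed(posts: list[Post], opted_out: bool) -> list[Post]:
        """Minors (or their parents) who opt out get a chronological feed,
        like in the early days of social media; everyone else keeps the
        engagement-ranked default."""
        if opted_out:
            return sorted(posts, key=lambda p: p.created_at, reverse=True)
        return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

    def access_allowed(now: datetime, night_block: bool,
                       hours_used_today: float,
                       daily_cap_hours: float | None) -> bool:
        """Enforce the optional overnight block (12 a.m. to 6 a.m.)
        and the optional daily time cap described in the bill."""
        if night_block and time(0, 0) <= now.time() < time(6, 0):
            return False
        if daily_cap_hours is not None and hours_used_today >= daily_cap_hours:
            return False
        return True
    ```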

    “This is a major issue that we all feel strongly about and that must be addressed,” James said. “Nationwide, children and teens are struggling with significantly high rates of depression, anxiety, suicidal thoughts and other mental health issues, largely because of social media.”

    The bill targets platforms like Facebook, Instagram, TikTok, Twitter and YouTube, where feeds are made up of user-generated content along with other material the platform suggests to users based on their personal data. Tech platforms have designed and promoted voluntary tools aimed at parents to help them control what content their kids can see, arguing that the decision about what boundaries to set should be up to individual families. But that hasn’t stopped critics from calling on platforms to do more — or from threatening further regulation.

    “Our children deserve a safer and more secure environment online, free from addictive algorithms and exploitation,” said Gounardes. “Algorithms are the new tobacco. Simple as that.”

    The New York legislation comes amid a raft of similar bills across the country that purport to safeguard young users by imposing tough new rules on platforms.

    States including Arkansas, Louisiana and Utah have passed bills requiring tech platforms to obtain a parent’s consent before creating accounts for teens. Federal lawmakers have introduced a similar bill that would ban kids under 13 from using social media altogether. And numerous lawsuits against social media platforms have accused the companies of harming users’ mental health. The latest of these suits came on Tuesday, when Utah’s attorney general sued TikTok for allegedly misleading consumers about the app’s safety.

    Mulgrew called the New York legislation necessary in part due to a lack of action by the federal government to protect kids.

    “The last time, first and only time that the United States government passed a bill to protect children in social media was 1998,” Mulgrew said, referring to the Children’s Online Privacy Protection Act (COPPA), a federal law that prohibits the collection of personal data from Americans under the age of 13 without parental consent. In July, the US Senate commerce committee voted to advance a bill that would expand COPPA’s protections to teens for the first time.

    New York officials on Wednesday also highlighted risks to children’s privacy online, including the chance their location or other personal data could fall into the hands of human traffickers and others who might prey on youth.

    “While other states and countries have enacted laws to limit the personal data that online platforms can collect from minors, no such restrictions currently exist in New York,” a press release from earlier Wednesday stated. “The two pieces of legislation introduced today will add critical protections for children and young adults online.”

    The New York Child Data Protection Act would protect children’s data online by prohibiting all online sites from collecting, using, sharing or selling the personal data of anyone under 18 for the purposes of advertising, without informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent or guardian.

    Both bills would authorize the attorney general to bring an action to enjoin or seek damages or civil penalties of up to $5,000 per violation and would allow parents or guardians of minors to sue for damages of up to $5,000 per user incident or for actual damages, whichever is greater.
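
    The penalty arithmetic in that provision is simple enough to state exactly. A minimal sketch, using only the figures in the announcement (the function names are invented for illustration):

    ```python
    PER_VIOLATION_PENALTY = 5_000  # civil penalty of up to $5,000 per violation

    def max_civil_penalty(num_violations: int) -> int:
        """Ceiling on civil penalties the attorney general could seek."""
        return PER_VIOLATION_PENALTY * num_violations

    def parental_damages(user_incidents: int, actual_damages: float) -> float:
        """Parents or guardians may sue for up to $5,000 per user incident
        or actual damages, whichever is greater."""
        return max(PER_VIOLATION_PENALTY * user_incidents, actual_damages)
    ```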

    The US Department of Health and Human Services says that while social media provides some benefits, it also presents “a meaningful risk of harm to youth.” The Surgeon General’s Social Media and Youth Mental Health Advisory released in May said children and adolescents who spend more than three hours a day on social media face double the risk of mental health problems like depression and anxiety, a finding the report called “concerning” given a recent survey that showed teens spend an average of 3.5 hours a day on social media.

    Source link

  • Biden administration defends communications with social media companies in high-stakes court fight | CNN Business

    Washington, DC
    CNN
     — 

    The Biden administration on Thursday defended its communications with social media giants in court, arguing those channels must stay open so that the federal government can help protect the public from threats to election security, Covid-19 misinformation and other dangers.

    The closely watched court fight reflects how social media has become an informational battleground for major social issues. It has revealed the messy challenges for social media companies as they try to manage the massive amounts of information on their platforms.

    And it has highlighted warnings by independent researchers, watchdog groups and government officials that malicious actors will continue to try to disrupt the country’s democracy by flooding the internet with bogus and divisive material ahead of the 2024 elections.

    In oral arguments before a New Orleans-based federal appeals court, the US government challenged a July injunction that blocked several federal agencies from discussing certain social media posts and sharing other information with online platforms, amid allegations by state governments that those communications amounted to a form of unconstitutional censorship.

    The appeals court last month temporarily blocked the injunction from taking effect. But the outcome of Thursday’s arguments will determine the ultimate fate of the order, which placed new limits on the Departments of Homeland Security, Health and Human Services and other federal agencies’ ability to coordinate with tech companies and civil society groups.

    If upheld by the US Court of Appeals for the Fifth Circuit, the injunction would suppress a broad range of public-private partnerships and undermine the US government’s mission to protect the public, the Biden administration argued.

    “For example, if there were a natural disaster, and there were untrue statements circulating on social media that were damaging to the public interest, the government would be powerless under the injunction to discourage social media companies from further disseminating those incorrect statements,” said Daniel Tenny, a Justice Department lawyer.

    Now, a three-judge panel of the Fifth Circuit is set to decide how executive agencies may respond to those threats.

    At issue is whether the US government unconstitutionally pressured social media platforms into censoring users’ speech, particularly when the government flagged posts to the platforms that it believed violated the companies’ own terms of service.

    During more than an hour of oral arguments Thursday, the three judges handling the appeal gave little indication of how they would rule in the case, with one judge asking just a couple of questions during the hearing. The other two spent much of the time pressing attorneys for the Biden administration and the plaintiffs in the case on issues concerning the scope of the injunction and whether the states even had the legal right – or standing – to bring the lawsuit.

    Before them is not only the request to reverse the lower court injunction, but also one from the administration to issue a more lasting pause on that injunction while the judges weigh the challenge to it.

    In briefs submitted to the court ahead of Thursday’s hearing, the Biden administration argued that a lower court judge was wrong to have identified the government communications with social media companies as potentially, in his words, “the most massive attack against free speech in United States’ [sic] history.”

    “There is a categorical, well-settled distinction between persuasion and coercion,” the administration’s lawyers wrote, adding that the lower court “equated legitimate efforts at persuasion with illicit efforts to coerce.”

    The administration’s opponents in the case, which include the states of Missouri and Louisiana, have argued that the federal government’s communications with social media companies are a violation of the First Amendment because even “‘encouragement short of compulsion’ can transform private conduct [by social media companies] into government action” that infringes on users’ speech rights.

    “Every one of these federal agencies has insinuated themselves into the content moderation decisions of major social media platforms,” D. John Sauer, an attorney representing the state of Louisiana, told the judges on Thursday. Hypothetically speaking, he added: “The Surgeon General can say, ‘All this speech is terrible, it’s awful.’ …. But what he can’t do is pick up the phone and say, ‘Take it down.’”

    In addition to the states, five individuals are also plaintiffs in the suit. They include three doctors who have been critical of state and federal pandemic-era restrictions, a Louisiana woman who claims she was censored by social media companies for her online criticisms of Covid health measures and a man who runs a far-right website known for pushing conspiracy theories.

    Much of Thursday’s oral arguments hinged on the definition of coercive communication and how courts have analyzed government pressure against private parties in past cases.

    But the states also claimed that there could be a pathway to finding a constitutional violation if the court agreed that social media companies, in heeding the administration’s calls to action, had been effectively turned into agents of the US government.

    In the past month, after District Judge Terry Doughty issued his injunction, current and former US officials, along with outside researchers and academics, have worried that the order could lead to a chilling effect for efforts to protect US elections.

    “There is no serious dispute that foreign adversaries have and continue to attempt to interfere in our elections and that they use social media to do it,” FBI Director Christopher Wray testified to the House Judiciary Committee in July. “President Trump himself in 2018 declared a national emergency to that very effect, and the Senate Intelligence Committee — in a bipartisan, overwhelmingly bipartisan way — not only found the same thing but called for more information-sharing between us and the social media.”

    Ohio Republican Rep. Jim Jordan, the panel’s chair, remains unconvinced. Earlier this week, he and other Republican lawmakers filed their own brief to the appeals court, accusing the Biden administration of a campaign to stifle speech.

    “On issue after issue, the Biden Administration has distorted the free marketplace of ideas promised by the First Amendment, bringing the weight of federal authority to bear on any speech it dislikes—including memes and jokes,” Jordan and the other lawmakers wrote. “Of course, Big Tech companies often required little coercion to do the Administration’s bidding on some issues. Generally eager to please their ideological allies and overseers in the federal government, these companies and other private entities have repeatedly censored accurate speech on important public issues.”

    Source link

  • Here’s what Donald Trump’s return to X could mean for the platform’s business | CNN Business

    New York
    CNN
     — 

    Nine months after Elon Musk reinstated Donald Trump’s account on the social network previously known as Twitter, the former president has returned to what was once his platform of choice for communicating with the country.

    The return of Trump – who used to be one of the site’s most prominent, if controversial, users – could mark a turning point for the company now called X after months of turbulence. Trump, who has nearly 87 million followers, could attract a wide set of viewers, especially in the lead-up to the 2024 presidential election, where he is the front-runner for the Republican nomination. But it could also present a new set of challenges for the social network, including for its effort to revive its ad business, if Trump decides to resume regularly posting on the platform at all.

    Trump on Thursday night posted on the platform for the first time since January 2021, when he was suspended for violating Twitter’s rules against glorification of violence in the wake of the January 6, 2021, attack on the US Capitol. On Thursday, he posted a photo of his mug shot – the first such photo of a US president in history – after his surrender in Georgia on more than a dozen charges stemming from his efforts to reverse the 2020 election results there. He also added a link to a fundraiser.

    Trump’s return appeared to be welcomed by X owner Musk, who has been encouraging politicians and public figures to post on the site in a bid to improve user numbers. He shared Trump’s X post saying, “Next-level.” Later, appearing to reference the former president without explicitly naming him, Musk posted that “the speed at which your message on this platform can reach a vast number of people is mind-blowing.”

    X declined to comment for this story.

    If Trump decides to return to regularly posting on X, it could be a major boon to the platform’s effort to attract an audience as it faces increased competition. In the wake of controversial policy decisions by Musk, a slew of Twitter copycats have popped up as users seek alternative platforms, including Meta’s Threads, which rolled out a key update this week. The week of July 17, traffic to then-Twitter was down more than 9% compared to the same period in the prior year, according to the most recent public report from web traffic intelligence firm Similarweb.

    Musk’s changes at the company have also irked some advertisers, weighing on X’s core business.

    When he was president, Trump’s posts on what was then Twitter often moved the markets, set the news cycle and drove the agenda in Washington – a fact that benefited the company in the form of countless hours of user engagement and almost certainly could again. And while Trump has remained mostly on his own platform, Truth Social, since he was suspended from many mainstream social networks in early 2021, X would give him a larger reach as he vies for the 2024 Republican nomination.

    Trump’s return “should have a positive impact on [X’s] engagement at a time when it needs it,” D.A. Davidson analyst Tom Forte told CNN in an email Friday.

    (It’s not clear how Musk – who has often been X’s main character since his takeover, thanks in some cases to his own policy decisions – would feel about sharing the spotlight.)

    That engagement could be a selling point for X in its quest to lure advertisers back to the platform. But Trump’s return could also raise fresh concerns for advertisers, some of whom have pulled back their spending on the platform over fears that their ads could run next to controversial or potentially objectionable content as Musk has reduced content moderation on the site.

    Musk said last month that the company still had negative cash flow because of a 50% decline in revenue from its core ad business, although CEO Linda Yaccarino said weeks later the company is now “close to break-even.”

    And while X’s leadership has said advertisers are returning thanks to new brand safety controls, at least two brands recently paused their spending on the platform after their ads were run alongside an account celebrating the Nazi party. (X suspended the account after it was flagged and said ad impressions on the page were minimal.)

    Trump frequently pushed boundaries when he was active on Twitter. For years, the platform took a light-touch approach to moderating his account, arguing at times that as a public official, the then-president must be given wide latitude to speak. Now, if Trump returns to his old habits – the former president has, for example, continued to falsely claim in posts on Truth Social that the 2020 election was stolen – Musk could be forced to decide whether to risk alienating additional advertisers or compromise his stated commitment to “free speech.”

    Forte said he will be closely watching the impact of Trump’s return on X’s advertising business. “The increased engagement should be favorable, but there is a risk that heightened controversy could hamper ad sales,” he said.

    And it’s not yet clear whether Trump will actually return to being active on X beyond Thursday’s post, which was essentially a fundraising appeal similar to what he posted on Truth Social. After Facebook restored Trump’s account earlier this year, many of his posts on that platform have been aimed at directing users to donate or volunteer for his campaign.

    What’s more, after making his return to X, Trump appeared to try to clarify where his loyalty lies. “I LOVE TRUTH SOCIAL. IT IS MY HOME!!” Trump posted on the X competitor platform.

    Source link

  • ‘Where is the phone?’ Huawei keeps quiet about Mate 60 Pro but takes aim at Tesla | CNN Business

    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Huawei has disappointed legions of fans — and US officials — eager to know more about its Mate 60 Pro smartphone, which has quickly become a symbol of the tech rivalry between the United States and China since it went on sale last month.

    Huawei’s consumer chief, Richard Yu, showed off a slew of new products including a tablet, smartwatch, earphones and even a challenge to Tesla (TSLA) on Monday, without going into detail about its flagship device, which has provoked calls in Washington for more sanctions against the Chinese tech and mobile giant.

    The United States has spent years trying to hobble Huawei’s ability to access the most advanced semiconductors, and the unveiling of its 5G phone in August took Western observers by surprise.

    The launch event became the most discussed topic on Chinese social network Weibo, racking up six billion views and 1.6 million posts. Meanwhile, a hashtag titled “#HuaweiConferenceWithoutMentioningMobilePhones” trended on Weibo, with 24.5 million views.

    “You’re telling me there will be no talk about the phone?” one user wrote on the social network.

    “Where is the phone?” said another.

    Huawei quietly started selling the Mate 60 Pro in August, without a formal launch event or sharing full technical specifications.

    Yu said onstage that the company was “working overtime” to urgently produce devices in the Mate 60 series “to allow more people to buy and use our products.”

    But “today, we will not introduce” those devices, he added.

    At one point, Huawei whetted viewers’ appetite by unveiling a new premium collection called Ultimate Design, introduced by Hong Kong singer and actor Andy Lau.

    The line consists of a luxury smartphone and smartwatch. Few details were released, though the company said the watch was made using bars of real gold — giving it a hefty price tag of 21,999 Chinese yuan ($3,009).

    Ben Sin, an independent tech reviewer, said he was “baffled” as to why Huawei did not discuss its smartphones.

    The company “knows everyone wants to know more about the chip [in the Mate 60 Pro], so them not talking about it is almost like defiance,” he said.

    Analysts who have examined the handset have said it includes a 5G chip, suggesting Huawei may have found a way to overcome American export controls.

    Huawei, formerly the world’s second largest maker of smartphones, has been attempting a comeback in China’s smartphone market after being hit by US export restrictions, which were first imposed in 2019.

    The company’s woes later forced it to sell off its budget mobile brand, Honor, leaving it in bad shape.

    But it is starting to find its way back.

    The firm’s smartphone sales grew in China by 58% in the second quarter of this year, compared to the same period last year, according to Counterpoint Research. Its share of the Chinese market rose from 6.9% to 11.3% over that period.

    Ivan Lam, a senior analyst at Counterpoint, said Huawei benefited from “its high brand exposure to” wealthy Chinese consumers. Because of this, Huawei’s market share in China is expected to further grow in 2024, he added.

    Huawei’s new phone is a boon for the company and may even pose a challenge to Apple’s (AAPL) market share in China, Lam said.

    The Shenzhen-based company has seen a recent “surge in sales” for its Mate 60 series, with weekly sales almost tripling to 225,000 units, according to Counterpoint.

    Yu demonstrated a number of other new products, starting with the latest version of its MatePad Pro, describing it as the lightest and thinnest tablet of its kind in the world. He said the device had been 10 years in the making.

    In addition, the company unveiled a new smart TV, wireless earphones and other gadgets.

    Huawei also took an aggressive swipe at Tesla, saying it would release its first sedan, the Luxeed S7, in November. The car will surpass Tesla’s Model S “in every specification,” said Yu.

    The company plans to release the Aito M9, an SUV, in December. Huawei has partnered with Chinese automakers to produce the two previously announced electric vehicles.

    Yu also announced Huawei was “ready to launch” an updated operating system, HarmonyOS NEXT.

    The system will include “native applications,” Yu said, without elaborating.

    Speculation has mounted that Huawei may be building an operating system that won’t be compatible with any Android apps.

    Huawei did not immediately respond to a request for comment on the matter.

    Source link

  • The Israel-Hamas war reveals how social media sells you the illusion of reality | CNN Business

    New York
    CNN
     — 

    As the Israel-Hamas war reaches the end of its first week, millions have turned to platforms including TikTok and Instagram in hopes of comprehending the brutal conflict in real time. Trending search terms on TikTok in recent days illustrate the hunger for frontline perspectives: From “graphic Israel footage” to “live stream in Israel right now,” internet users are seeking out raw, unfiltered accounts of a crisis they are desperate to understand.

    For the most part, they are succeeding, discovering videos of tearful Israeli children wrestling with the permanence of death alongside images of dazed Gazans sitting in the rubble of their former homes. But that same demand for an intimate view of the war has created ample openings for disinformation peddlers, conspiracy theorists and propaganda artists — malign influences that regulators and researchers now warn pose a dangerous threat to public debates about the war.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attacks, including false claims that they were orchestrated by the media. Another, viewed more than 100,000 times, shows a clip from the video game “Arma 3” with the caption, “The war of Israel.” (Some users in the comments of that video noted they had seen the footage circulating before — when Russia invaded Ukraine.)

    TikTok is hardly alone. One post on X, formerly Twitter, was viewed more than 20,000 times and flagged as misleading by London-based social media watchdog Reset for purporting to show Israelis staging civilian deaths for cameras. Another X post the group flagged, viewed 55,000 times, was an antisemitic meme featuring Pepe the Frog, a cartoon that has been appropriated by far-right white supremacists. On Instagram, a widely shared video of parachutists dropping in on a crowd, captioned “imagine attending a music festival when Hamas parachutes in,” was debunked over the weekend and, in fact, showed unrelated parachute jumpers in Egypt. (Instagram later labeled the video as false.)

    This week, European Union officials sent warnings to TikTok, Facebook and Instagram-parent Meta, YouTube and X, highlighting reports of misleading or illegal content about the war on their platforms and reminding the social media companies they could face billions of dollars in fines if an investigation later determines they violated EU content moderation laws. US and UK lawmakers have also called on those platforms to ensure they are enforcing their rules against hateful and illegal content.

    Imran Ahmed, founder and CEO of the social media watchdog group Center for Countering Digital Hate, told CNN his group has tracked a spike in efforts to pollute the information ecosystem surrounding the conflict since the violence in Israel began.

    “Getting information from social media is likely to lead to you being severely disinformed,” said Ahmed.

    Everyone from US foreign adversaries to domestic extremists to internet trolls and “engagement farmers” has been exploiting the war on social media for their own personal or political gain, he added.

    “Bad actors surrounding us have been manipulating, confusing and trying to create deception on social media platforms,” Dan Brahmy, CEO of the Israeli social media threat intelligence firm Cyabra, said Thursday in a video posted to LinkedIn. “If you are not sure of the trustworthiness [of content] … do not share,” he said.

    ‘Upticks in Islamophobic and antisemitic narratives’

    Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council in Washington, DC, told CNN his team has witnessed a similar phenomenon. The trend includes a wave of first-party terrorist propaganda, content depicting graphic violence, misleading and outright false claims, and hate speech – particularly “upticks in specific and general Islamophobic and antisemitic narratives.”

    Much of the most extreme content, he said, has been circulating on Telegram, the messaging app with few content moderation controls and a format that facilitates quick and efficient distribution of propaganda or graphic material to a large, dedicated audience. But in much the same way that TikTok videos are frequently copied and rebroadcast on other platforms, content shared on Telegram and other more fringe sites can easily find a pipeline onto mainstream social media or draw in curious users from major sites. (Telegram didn’t respond to a request for comment.)

    Schools in Israel, the United Kingdom and the United States this week urged parents to delete their children’s social media apps over concerns that Hamas will broadcast or disseminate disturbing videos of hostages who have been seized in recent days. Photos of dead or bloodied bodies, including those of children, have already spread across Facebook, Instagram, TikTok and X this week.

    And tech watchdog group Campaign for Accountability on Thursday released a report identifying several accounts on X sharing apparent propaganda videos with Hamas iconography or linking to official Hamas websites. Earlier in the week, X faced criticism for videos unrelated to the war being presented as on-the-ground footage and for a post from owner Elon Musk directing users to follow accounts that previously shared misinformation (Musk’s post was later deleted, and the videos were labeled using X’s “community notes” feature.)

    Some platforms are in a better position to combat these threats than others. Widespread layoffs across the tech industry, including at some social media companies’ ethics and safety teams, risk leaving the platforms less prepared at a critical moment, misinformation experts say. Much of the content related to the war is also spreading in Arabic and Hebrew, testing the platforms’ capacity to moderate non-English content, where enforcement has historically been less robust than in English-language content.

    “Of course, platforms have improved over the years. Communication & info sharing mechanisms exist that did not in years past. But they have also never been tested like this,” Brian Fishman, the co-founder of trust and safety platform Cinder who formerly led Facebook’s counterterrorism efforts, said Wednesday in a post on Threads. “Platforms that kept strong teams in place will be pushed to the limit; platforms that did not will be pushed past it.”

    Linda Yaccarino, the CEO of X, said in a letter Wednesday to the European Commission that the platform has “identified and removed hundreds of Hamas-related accounts” and is working with several third-party groups to prevent terrorist content from spreading. “We’ve diligently taken proactive actions to remove content that violates our policies, including: violent speech, manipulated media and graphic media,” she said. The European Commission on Thursday formally opened an investigation into X following its earlier warning about disinformation and illegal content linked to the war.

    Meta spokesperson Andy Stone said that since Hamas’ initial attacks, the company has established “a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation. Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.”

    YouTube, for its part, says its teams have removed thousands of videos since the attack began, and it continues to monitor for hate speech, extremism, graphic imagery and other content that violates its policies. In searches related to the war, the platform is also surfacing almost exclusively videos from mainstream news organizations.

    Snapchat told CNN that its misinformation team is closely watching content coming out of the region, making sure it is within the platform’s community guidelines, which prohibit misinformation, hate speech, terrorism, graphic violence and extremism.

    TikTok did not respond to a request for comment on this story.

    Large tech platforms are now subject to content-related regulation under a new EU law called the Digital Services Act, which requires them to prevent the spread of mis- and disinformation, address rabbit holes of algorithmically recommended content and avoid possible harms to user mental health. But in such a contentious moment, platforms that take too heavy a hand in moderation could risk backlash and accusations of bias from users.

    Platforms’ algorithms and business models — which generally rely on the promotion of content most likely to garner significant engagement — can aid bad actors who design content to capitalize on that structure, Ahmed said. Other product choices, such as X’s moves to allow any user to pay for a subscription for a blue “verification” checkmark that grants an algorithmic boost to post visibility, and to remove the headlines from links to news articles, can further manipulate how users perceive a news event.
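
    The engagement-first objective described here can be sketched in a few lines. This is purely illustrative; no platform publishes its actual scoring function, and the boost factor below is invented for the example:

    ```python
    def rank_score(predicted_engagement: float,
                   author_is_paid_subscriber: bool,
                   subscriber_boost: float = 1.5) -> float:
        """Illustrative engagement-driven ranking: content expected to draw
        the most interaction rises, and a paid subscription multiplies reach."""
        score = predicted_engagement
        if author_is_paid_subscriber:
            score *= subscriber_boost
        return score

    # Under this objective, an inflammatory but highly engaging post from a
    # paid account outranks a careful report from an unpaid one:
    print(rank_score(0.9, author_is_paid_subscriber=True))   # 1.35
    print(rank_score(0.4, author_is_paid_subscriber=False))  # 0.4
    ```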

    “It’s time to break the emergency glass,” Ahmed said, calling on platforms to “switch off the engagement-driven algorithms.” He added: “Disinformation factories are going to cause geopolitical instability and put Jews and Muslims at harm in the coming weeks.”

    Even as social media companies work to hide the absolute worst content from their users — whether out of a commitment to regulation, advertisers’ brand safety concerns, or their own editorial judgments — users’ continued appetite for gritty, close-up dispatches from Israelis and Palestinians on the ground is forcing platforms to walk a fine line.

    “Platforms are caught in this demand dynamic where users want the latest and the most granular, or the most ‘real’ content or information about events, including terrorist attacks,” Brookie said.

    The dynamic simultaneously highlights the business models of social media and the role the companies play in carefully calibrating their users’ experiences. The very algorithms that are widely criticized elsewhere for serving up the most outrageous, polarizing and inflammatory content are now the same ones that, in this situation, appear to be giving users exactly what they want.

    But closeness to a situation is not the same thing as authenticity or objectivity, Ahmed and Brookie said, and the wave of misinformation flooding social media right now underscores the dangers of conflating them.

    Despite giving the impression of reality and truthfulness, Brookie said, individual stories and combat footage conveyed through social media often lack the broader perspective and context that journalists, research organizations and even social media moderation teams apply to a situation to help achieve a fuller understanding of it.

    “It’s my opinion that users can interact with the world as it is — and understand the latest, most accurate information from any given event — without having to wade through, on an individual basis, all of the worst possible content about that event,” Brookie said.

    Potentially exacerbating the messy information ecosystem is a culture on social media platforms that often encourages users to bear witness to and share information about the crisis as a way of signaling their personal stance, whether or not they are deeply informed. That can lead even well-intentioned users to unwittingly share misleading information or highly emotional content created to collect views or monetize engagement.

    “Be very cautious about sharing in the middle of a major world event,” Ahmed said. “There are people trying to get you to share bullsh*t, lies, which are designed to inculcate you to hate or to misinform you. And so sharing stuff that you’re not sure about is not helping people, it’s actually really harming them and it contributes to an overall sense that no one can trust what they’re seeing.”

    Source link

  • Illinois Supreme Court upholds state’s assault-style weapons ban | CNN Politics

    CNN
     — 

    The Illinois Supreme Court on Friday upheld the state’s assault-style weapons ban in a 4-3 ruling after months of legal challenges sought to dismantle the law.

    State lawmakers in January passed, and Democratic Gov. J.B. Pritzker signed into law, a measure to ban assault-style rifles and high-capacity magazines. Those who already own such rifles face limitations on their sale and transfer and must register them with the Illinois State Police by 2024.

    That law – which came about six months after the July 2022 Highland Park, Illinois, shooting – faced immediate lawsuits in state and federal court that argued it violated the Illinois and US constitutions.

    A Macon County Circuit Court judge found earlier this year that exemptions to the law, including for law enforcement officers and armed guards at federally supervised nuclear sites, violated the equal protection clause of the state’s constitution.

    The Illinois Supreme Court agreed to fast-track the state’s appeal, and in a 20-page opinion, reversed the circuit court’s judgment. The majority’s opinion claimed to focus on two core issues brought by the plaintiffs: whether the law violated the plaintiffs’ right to equal protection and whether it constituted special legislation that created laws for some firearms owners and not others. The majority opinion notably did not decide whether the ban violated the Second Amendment, asserting that the plaintiffs had waived this issue.

    “We express no opinion on the potential viability of plaintiffs’ waived claim concerning the Second Amendment,” they wrote.

    However, one of the plaintiffs’ attorneys, Jerry Stocks, told CNN the majority justices misrepresented their arguments. Stocks said the Second Amendment is a fundamental right inextricably linked to their arguments and thus should have weighed heavily on scrutiny of the ban. Ignoring the issue altogether was improper, he said.

    “We have a circus in Illinois and the clowns are in charge right now,” Stocks said.

    Illinois Attorney General Kwame Raoul said the new law is a “critical part” of the state’s efforts to combat gun violence, and Pritzker’s office hailed the decision to uphold “a commonsense gun reform law to keep mass-killing machines off of our streets and out of our schools, malls, parks, and places of worship.”

    Nancy Rotering, the Democratic mayor of Highland Park, called on Congress to act on tougher federal restrictions and said Friday’s decision “sends a message to residents that saving lives takes precedence over thoughts and prayers and acknowledges the importance of sensible gun control measures.”

    Illinois has struggled to restrict the flow of illegal guns, particularly in Chicago, while officials in the state have faced legal hurdles to implementing new gun restrictions.

    Gun rights advocates challenged the assault-style weapons ban and asked the US Supreme Court to block it – along with a city ordinance passed last year by Naperville, Illinois, that bans the sale of assault rifles – but the high court in May refused to intervene.

    This story has been updated with additional details.

    Source link

  • Maui conspiracy theories are spreading on social media. Why this always happens after a disaster | CNN Business

    CNN
     — 

    A slew of viral conspiracy videos on social media have made baseless claims that the Maui wildfires were started intentionally as part of a land grab, highlighting how quickly misinformation spreads after a disaster.

    While the cause of the fires hasn’t been determined, Hawaiian Electric — the major power company on Maui — is under scrutiny for not shutting down power lines when high winds created dangerous fire conditions. (Hawaiian Electric previously said both the company and the state are conducting investigations into what happened). Maui experienced high winds from Hurricane Dora in the south while it was also grappling with a drought. Wildfires across the region have long been a concern.

    Still, conspiracy theories continue to circulate as nearly 400 people are still unaccounted for.

    It’s not uncommon for conspiracy theories to make the rounds after a national crisis. According to Renee DiResta, a research manager at Stanford University who studies misinformation, people often look for a way to make sense of the world when they are anxious or have a feeling of powerlessness.

    “Theories that attribute the cause of a crisis to a specific bad actor offer a villain to blame, someone to potentially hold responsible,” DiResta said. “The conspiracy theories that are the most effective and plausible are usually based on some grain of truth and connect to some existing set of beliefs about the world.”

    For example, someone who distrusts the government may be more inclined to believe someone who posts negatively about a government agency.

    Conspiracy theorists on varying platforms claim the fires, which killed at least 114 people earlier this month, were planned as part of a strategic effort to weed out less wealthy residents on Maui and make room for multi-million dollar developments.

    In one video, a user claims a friend sent him a video of a laser beam “coming out of the sky, directly targeting the city.” “This was a direct energy weapon assault,” he said. The video remains posted but now includes a label from Instagram listing it as “false information.” The imagery appears to be from a previous SpaceX launch in California.

    Related far-fetched theories say the alleged “laser beams” were programmed not to hit anything blue, explaining why so many blue beach umbrellas were left unscathed by the fires.

    Other social media users allege elite Maui residents were behind the fires so they could buy the destroyed land at a discounted price and potentially rebuild it as a “smart city.”

    “You’re telling me that these cheaper lower middle class houses burnt down directly across the street and all of the mansions are still standing?” one YouTube user posted, referencing aerial imagery taken of the destruction.

    One tweet about a celebrity purchasing hundreds of acres across Maui over the past few years has received more than 12 million views on X, the platform formerly known as Twitter.

    When a conspiracy theory gains traction online, others may chime in and offer explanations for details not discussed in the original post. Social media algorithms can amplify these theories based on user attention and interactions.

    “Social media is incredibly valuable in crisis events as people on the ground can report the facts directly, but that usefulness is tempered, and can be dangerous, if misleading claims proliferate particularly in the immediate aftermath,” DiResta said.

    Social media platforms like Instagram, TikTok and YouTube have taken steps to curb the spread of conspiracy theories and misinformation, but some videos can slip through the cracks. Many platforms use a mix of tech monitoring tools and human reviewers to enforce their community guidelines.

    Before this article was published, TikTok removed several conspiracy theory videos flagged by CNN for violating its community guidelines, which prohibit “inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent” on the platform. A company spokesperson said more than 40,000 trust and safety professionals around the world review and moderate content at all hours of the day.

    Meanwhile, in a statement provided to CNN, YouTube spokesperson Elena Hernandez said the platform uses different sections, such as top news, developing news and a fact-check panel, to provide users with as much context and background information as possible on certain trending topics, and will remove content when necessary.

    “During major news events, such as the horrific fires in Hawaii, our systems are designed to raise up content from authoritative sources in search results and recommendations,” Hernandez said.

    Instagram also employs third-party fact-checkers who contact sources, check public data and work to verify images and videos in questionable content. They then rate the content in question and apply labels, such as “false,” “altered” or “missing context,” to encourage viewers to think critically about what they’re about to see.

    As a result, those posts show up far less often in users’ feeds and repeat offenders can face varying risks, such as losing monetization on their pages.
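
    As a rough illustration of how such labels could translate into reduced distribution and repeat-offender penalties, consider the sketch below. The rating names mirror the labels mentioned above, but the demotion multipliers, the three-strike threshold, and the function names are hypothetical; Meta does not publish this logic.

    ```python
    # Hypothetical fact-check demotion pipeline -- the multipliers and
    # the three-strike threshold below are invented for illustration.
    from typing import Dict, Optional

    RATING_DEMOTION = {"false": 0.05, "altered": 0.10, "missing context": 0.50}

    def adjusted_rank(base_score: float, rating: Optional[str]) -> float:
        # Labeled posts surface far less often; unlabeled posts are untouched.
        return base_score * RATING_DEMOTION.get(rating, 1.0)

    offense_counts: Dict[str, int] = {}

    def record_offense(account: str) -> str:
        # Repeat offenders face escalating penalties, e.g. losing monetization.
        offense_counts[account] = offense_counts.get(account, 0) + 1
        return "demonetized" if offense_counts[account] >= 3 else "warned"

    print(adjusted_rank(100.0, "false"))  # 5.0   -- heavily demoted
    print(adjusted_rank(100.0, None))     # 100.0 -- unaffected
    ```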

    Social media platform X did not immediately respond to a request for comment.

    Michael Inouye, a principal analyst at market research firm ABI Research, said social media companies are in a challenging spot because they want to uphold freedom of speech, but do so in an environment where posts that receive the most shares and likes often rise to the top of user feeds. That means posts sharing conspiracy theories that spark fear and emotion may perform better in a crisis than those sharing straightforward, accurate information.

    “Ultimately, social media will have to decide if it wants to be a better news organization or remain this ‘open’ platform for expression that can run counter to the ethics and standards that is required by news reporting,” Inouye said. “The problem is, even if something isn’t labeled as ‘news,’ some will still interpret personal opinion as truth, which puts us back in the same position.”

    Source link

  • Netflix shutters its DVD rental business, marking the end of the red envelope era | CNN Business

    CNN
     — 

    Netflix will send out its last red envelope on Friday, marking an end to 25 years of mailing DVDs to members.

    The company announced earlier this year that it is shutting down its DVD-by-mail service, 16 years after it began gradually shifting its focus to streaming content online. Netflix will continue to accept returns of customers’ remaining DVDs until October 27.

    Introduced in 1998 when Netflix first launched, the DVD service promised an easier rental experience than having to drive to the nearest Blockbuster or Hollywood Video. The red envelopes, which have long been synonymous with Netflix itself, littered homes and dorm rooms across the country.

    Although the idea of receiving a DVD in the mail now may sound almost as outdated as dial-up internet, some longtime customers told CNN they continued to find value in the DVD option.

    Colin McEvoy, a father of two from Bethlehem, Pennsylvania and a self-described film fanatic, said he rushed through 40 movies in the last few weeks to get through the remainder of his queue before the service ends. McEvoy has remained faithful to Netflix’s DVD service so he can keep watching Bollywood and obscure independent films not often found on streaming services.

    “I was basically watching them as soon as I got them, and then returning the discs as quickly as possible to get as many as I could,” said McEvoy, who has been using Netflix’s DVD-by-mail service since 2001, just three years after it launched.

    “I remember I was in high school when I first signed up for it, and the concept was so novel I had to really convince my dad that it was a legit service and not some sort of Internet scam,” said McEvoy, who uses an old Xbox 360 to play his Netflix DVDs. “Now I have friends who’ve seen my red Netflix envelopes arrive in the mail, and either didn’t remember what they were or couldn’t believe that I still got the DVDs in the mail.”

    Some other Netflix users stood by its DVD service not only for the selection but for added perks. Brandon Cordy, a 41-year-old graphic designer from Atlanta, previously told CNN he stuck with DVDs because many digital rentals don’t come with special features or audio commentaries.

    There are other factors, too. Michael Inouye, an analyst at ABI Research, said some consumers may still not have access to reliable or fast enough broadband connections, or simply prefer physical media to digital, much in the way that some audio enthusiasts still purchase and collect CDs and records.

    For Netflix, however, the offering has made less sense in recent years. “Our goal has always been to provide the best service for our members, but as the DVD business continues to shrink, that’s going to become increasingly difficult,” co-CEO Ted Sarandos wrote in a blog post in April.

    Shutting down its DVD business could help Netflix better focus resources as it expands into new markets such as gaming as well as live and interactive content. Its DVD business has also declined significantly in recent years. In 2021, Netflix’s non-streaming revenue – mostly attributable to DVDs – amounted to 0.6% of its revenue, or just over $182 million.

    The cost to operate its DVD business may also be a factor, especially as Netflix rethinks expenses broadly amid heightened streaming competition and broader economic uncertainty. “Moving plastic discs around costs far more money than streaming digital bits,” said Eric Schmitt, senior director analyst at Gartner Research. “Removing and replacing damaged and lost inventory are also cost considerations.”

    Even before Netflix announced the news, some longtime subscribers said they could see the writing on the wall.

    “The inventory of available titles, while still vast, had been contracting some over the years with some movies that were once available no longer being so,” Cordy said. “Turnaround times to get a new movie or movies also started to take longer, so I knew it was only a matter of time. But I didn’t want it to end if I could help it.”

    Other DVD subscribers were hoping for a happy ending. Bill Rouhana, the CEO of Chicken Soup for the Soul Entertainment – which owns DVD rental service Redbox – told The Hollywood Reporter in April that he hoped to purchase Netflix’s DVD business. “I’d like to buy it… I wish Netflix would sell me that business instead of shutting it down,” he said. Redbox remains popular despite the shift to streaming, but it took a hit during the pandemic because of the lack of new movies and TV shows to fill the boxes.

    A Netflix spokesperson told CNN it has no plans to sell the DVD business and will be recycling the majority of its DVDs through third-party companies that specialize in recycling digital and electronic media. It will also donate some of its inventory to organizations focused on film and media.

    Netflix is also offering subscribers a “finale surprise” in which they can opt in to receive up to 10 DVDs selected at random from their queues.

    McEvoy, who already subscribes to Disney+, Hulu, the Criterion Channel and Mubi, said he’s now testing out other services such as Eros (Indian cinema) and Viki (Korean and Chinese films) for harder-to-find content. Still, he said, he’s “sad” to see Netflix’s DVD service depart.

    “I absolutely would not have been able to find all of those movies [I’ve watched] if not for the Netflix DVD service,” he said.

    Source link

  • Parents urged to delete their kids’ social media accounts ahead of possible Israeli hostage videos | CNN Business

    New York
    CNN
     — 

    Schools in Israel, the UK and the US are advising parents to delete their children’s social media apps over concerns that Hamas militants will broadcast or disseminate disturbing videos of hostages who have been seized in recent days.

    A Tel Aviv school’s parents’ association said it expects videos of hostages “begging for their lives” to surface on social media. In a message to parents, shared with CNN by a mother of children at a high school in Tel Aviv, the association asked parents to remove apps such as TikTok from their children’s phones.

    “We cannot allow our kids to watch this stuff. It is also difficult, furthermore – impossible – to contain all this content on social media,” according to the parents’ association. “Thank you for your understanding and cooperation.”

    Hamas has warned that it will post murders of hostages on social media if Israel targets people in Gaza without warning.

    There are additional concerns that terrorists will exploit social media algorithms to specifically target such videos to followers of Jewish or Israeli influencers in an effort to wage psychological warfare on Israelis and Jews and their supporters globally.

    During the onslaught on Saturday, armed Hamas militants poured over the heavily fortified border into Israel and took as many as 150 hostages, including Israeli army officers, back to Gaza. The surprise attacks killed at least 1,200 people, according to the Israel Defense Forces, and injured thousands more.

    Since Israel began airstrikes on the Palestinian enclave Saturday, at least 1,055 people have been killed in Gaza, including hundreds of children, women, and entire families, according to the Palestinian health ministry. It said a further 5,184 have been injured, as of Wednesday.

    As the war rages on, some Jewish schools in the US are also asking parents not to share related videos or photos that may surface, and to prevent children – and themselves – from watching them. The schools are also advising community members to delete their social media apps during this time.

    “Together with other Jewish day schools, we are warning parents to disable social media apps such as Instagram, X, and Tiktok from their children’s phones,” the head of a school in New Jersey wrote in an email. “Graphic and often misleading information is flowing freely, augmenting the fears of our students. … Parents should discuss the dangers of these platforms and ask their children on a daily basis about what they are seeing, even if they have deleted the most unfiltered apps from their phones.”

    Another school in the UK said it asked students to delete their social media apps during a safety assembly.

    TikTok, Instagram and X – formerly known as Twitter – did not immediately respond to requests for comment on how they are combating the surge of such videos or on schools asking parents to delete their apps.

    But X said on its platform that it has experienced an increase in daily active users in the conflict area and that its escalation teams have “actioned tens of thousands of posts for sharing graphic media, violent speech, and hateful conduct.” It did not respond to a request to comment further or to define “actioned.”

    “We’re also continuing to proactively monitor for antisemitic speech as part of all our efforts,” X’s safety team said. “Plus we’ve taken action to remove several hundred accounts attempting to manipulate trending topics.”

    The company added it remains “laser focused” on enforcing the site’s rules and reminded users they can limit sensitive media they may encounter by visiting the “Content you see” option in Settings.

    Still, misinformation continues to run rampant on social media platforms, including X.

    A post viewed more than 500,000 times – featuring the hashtag #PalestineUnderAttack – claimed to show an airplane being shot down. But the clip was from the video game Arma 3, as was later noted in a “community note” appended to the post.

    Another video purporting to show Israeli generals captured by Hamas fighters had been viewed more than 1.7 million times by Monday. The video, however, actually shows the detention of separatists in Azerbaijan.

    On Tuesday, the European Union warned Elon Musk of “penalties” for disinformation circulating on X amid the Israel-Hamas war.

    The EU also notified Meta CEO Mark Zuckerberg on Wednesday of a disinformation surge on its platforms – which include Facebook – and demanded the company respond within 24 hours with its plan to combat the issue.

    In an Instagram story on Tuesday, Zuckerberg called the attack “pure evil” and said his focus “remains on the safety of our employees and their families in Israel and the region.”

    Source link

  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business

    CNN
     — 

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury is the fact that many of these AI models have been trained on the work of human artists, whose artwork was quietly scraped from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about a tool dubbed Glaze, developed by computer scientists at the University of Chicago, that thwarts AI models’ attempts to perceive a work of art by making pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend artists’ livelihoods.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to put what is essentially an invisible cloak on artworks, thwarting AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
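
    The underlying idea resembles an adversarial perturbation: small, bounded pixel changes that pull the features a model extracts from an image toward a different style, while the image still looks unchanged to people. The sketch below illustrates that general idea in Python with PyTorch; the feature extractor (an off-the-shelf ResNet-18) and every hyperparameter are stand-ins chosen for illustration, not Glaze’s actual model or settings.

    ```python
    # A minimal sketch of the "style cloak" idea, NOT Glaze's real algorithm.
    # It nudges an image so a stand-in feature extractor reads it as the
    # target style, while capping per-pixel changes at a small epsilon.
    import torch
    import torchvision.models as models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
    for p in extractor.parameters():
        p.requires_grad_(False)

    def cloak(image: torch.Tensor, style_target: torch.Tensor,
              eps: float = 0.03, steps: int = 50, lr: float = 0.005) -> torch.Tensor:
        """Shift `image`'s features toward `style_target` (e.g. a charcoal
        rendering of the same piece), bounding changes to `eps` per pixel."""
        with torch.no_grad():
            target_feat = extractor(style_target)
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            feat = extractor((image + delta).clamp(0, 1))
            torch.nn.functional.mse_loss(feat, target_feat).backward()
            opt.step()
            # Project back into the imperceptibility budget.
            delta.data.clamp_(-eps, eps)
        return (image + delta).clamp(0, 1).detach()
    ```

    Capping the perturbation at a small per-pixel budget (`eps`) is what keeps such a cloak largely imperceptible to humans while still moving what the machine sees.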

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version of the tool as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest-resolution versions of their works online as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough, and there needs to be regulation governing how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI-generated images is threatening the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image shows how the researchers’ tool, PhotoGuard, would prevent AI models from editing the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were wearing suits and ties in the picture. But when he tried to make the same edits to a photo that had been immunized by PhotoGuard, the resulting image depicted Salman and Noah’s floating heads on an array of gray pixels.
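
    A rough way to picture this immunization is as an encoder attack: perturb the photo so that a generative model’s image encoder maps it to the latent representation of a decoy, here a flat gray image, so downstream edits decode into something unrealistic. The sketch below assumes a generic differentiable `encoder`; the budget, step count, and step size are illustrative, not values from the MIT paper or PhotoGuard’s released code.

    ```python
    # A simplified sketch of the encoder-attack flavor of image "immunization".
    # `encoder` stands in for a real generative model's image encoder; all
    # hyperparameters are invented for illustration.
    import torch

    def immunize(image: torch.Tensor, encoder, eps: float = 0.04,
                 steps: int = 100, step_size: float = 0.01) -> torch.Tensor:
        gray = torch.full_like(image, 0.5)  # the decoy target
        with torch.no_grad():
            target_latent = encoder(gray)
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            latent = encoder((image + delta).clamp(0, 1))
            loss = torch.nn.functional.mse_loss(latent, target_latent)
            loss.backward()
            # Signed-gradient step toward the decoy latent, then project
            # back into the imperceptibility budget.
            delta.data -= step_size * delta.grad.sign()
            delta.grad.zero_()
            delta.data.clamp_(-eps, eps)
        return (image + delta).clamp(0, 1).detach()
    ```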

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.

    Source link