ReportWire

Tag: Meta Platforms Inc

  • X is ‘close to breakeven’ says CEO Linda Yaccarino | CNN Business

    New York (CNN) — 

    X CEO Linda Yaccarino, leader of the platform formerly known as Twitter, said the company is keeping an eye on new competitor Threads, despite the sharply slowing growth of the rival app from Meta.

    “Threads did jump in with a ton of hype and a launch pad from their Instagram users … [but] it’s dropped off dramatically,” Yaccarino told CNBC Thursday in her first interview as CEO of the company now called X.

    “But you can never, ever take your eye off any competition because they’ll continue iterating and as much as the launch has stalled, we’re keeping an eye on everything that they’re doing.”

    Still, Yaccarino said X remains largely focused on its own future as the company chases profitability, and that Threads may be looking at its past.

    “What we can see is that [Threads] may be building to what Twitter was — enter rebrand, enter X — and we’re focused on what X will be, and it’s an entirely different roadmap and vision,” she said.

    Staving off competition from Meta’s Threads and other rival platforms is just one of the things Yaccarino is now tasked with after taking over from owner Elon Musk as X’s CEO in June. In just her first two months, the company underwent a massive rebrand from Twitter to X in hopes of transforming into an “everything app” similar to China’s WeChat, and has continued to warn of challenges reviving its core advertising business. Musk, who is now the company’s chief technology officer, has also been preparing for a cage fight with Meta CEO Mark Zuckerberg.

    Yaccarino joined the company after months of turmoil caused by Musk’s takeover, including mass layoffs, controversial policy decisions and various legal battles.

    But on Thursday, she doubled down on the company’s vision and explained why it retired its highly recognized brand name.

    “The rebrand really represented a liberation from Twitter, a liberation that allows us to evolve past a legacy mindset and to reimagine how everyone … around the world is going to change how we congregate, how we transact, all in one place,” Yaccarino said, adding that users would soon be able to make video calls and payments through the platform.

    “It’s developing into this global town square that is fueled by free expression, where the public gathers in real time,” she said.

    Yaccarino said that the company is returning to growth mode after months of slashing costs through ongoing layoffs, infrastructure and office space reductions and, in some cases, allegedly holding back on paying its bills and employee severance. Twitter’s staff has shrunk from nearly 8,000 employees to just around 1,500 workers since Musk’s takeover, Yaccarino said.

    “Are we hiring? Yes,” Yaccarino said. “I get to come in and shift from this cost discipline to growth … the future is bright.”

    Threatening to stand in the way of that evolution are the company’s very real business challenges. Musk last month disclosed in a post that, due to a 50% drop in advertising revenue and a “heavy debt load,” the platform is still losing money. Musk bought Twitter for $44 billion last October; the company’s value now stands at around $15 billion, according to a May disclosure from a Fidelity fund.

    Yaccarino, a former marketing executive with NBCUniversal, was brought on to Twitter in part to help revive its advertising business. And she said on Thursday that the company is “close to breakeven.”

    “Coca Cola, Visa, State Farm is a huge partner, they’re coming back — the last bunch of weeks, continued revenue growth,” Yaccarino said.

    But maintaining the ad business has been an uphill battle for the site since Musk’s takeover. Hordes of advertisers halted spending on the platform over concerns about content moderation, mass layoffs and general uncertainty about the company’s future. Musk has also defended his own controversial tweets, telling CNBC in May, “I’ll say what I want, and if the consequence of that is losing money, so be it.”

    Yaccarino pointed to the company’s “freedom of speech, not freedom of reach” policy that aims to limit the reach of so-called lawful but awful content on the platform and to protect brands from having their ads appear alongside such content. X on Tuesday rolled out additional brand safety controls for advertisers, including the ability to avoid having their ads show next to “targeted hate speech, sexual content, gratuitous gore, excessive profanity, obscenity, spam, drugs.”

    “I wrap my security blanket around you, my brand and my CMO, and say your ads will only air next to content that is appropriate for you,” Yaccarino said Thursday.


  • Mark Zuckerberg says ‘it’s time to move on’ from Elon Musk cage fight | CNN Business

    (CNN) — 

    Mark Zuckerberg says Elon Musk “isn’t serious” about a cage fight and “it’s time to move on” from their proposed showdown, the details of which were never nailed down.

    “Elon won’t confirm a date, then says he needs surgery, and now asks to do a practice round in my backyard instead,” the Meta chief executive wrote on social platform Threads Sunday.

    “If Elon ever gets serious about a real date and official event, he knows how to reach me. Otherwise, time to move on. I’m going to focus on competing with people who take the sport seriously.”

    Zuckerberg, 39, had previously proposed August 26 for the fight, but said Musk, 52, hadn’t confirmed.

    Last week, Musk wrote that the possible showdown would be streamed on X, formerly known as Twitter, which he owns. Musk said he was “lifting weights throughout the day, preparing for the fight,” adding “all proceeds will go to charity for veterans.”

    Zuckerberg, a practitioner of Brazilian jiu-jitsu, won gold and silver in two featherweight white belt categories at a California martial arts tournament in May.

    In June, the two tech billionaires seemingly agreed to face each other in a cage fight. The stakes for the potential clash were raised last month when Meta launched Threads, seen as a direct competitor to Twitter.

    On July 10, a few days after launch, Zuckerberg said more than 100 million people had signed up for the platform, making it one of the fastest-growing apps in history.

    But weeks later, industry estimates showed that Threads was struggling to retain users and that engagement had fallen to new lows.


  • How to block graphic social media posts on your kids’ phones | CNN Business

    New York (CNN) — 

    Many schools, psychologists and safety groups are urging parents to disable their children’s social media apps over mounting concerns that Hamas plans to disseminate graphic videos of hostages captured in the Israel-Gaza war.

    Disabling an app or implementing restrictions, such as filtering out certain words and phrases, on young users’ phones may sound like a daunting process. But platforms and mobile operating systems offer safeguards that could go a long way in protecting a child’s mental health.

    Following the attacks on Israel last weekend, much of the terror has played out on social media. Videos of hostages being taken on the streets and of wounded civilians continue to circulate on various platforms. Although some companies have pledged to restrict sensitive videos, many are still being shared online.

    That can be particularly stressful for minors. The American Psychological Association recently issued a warning about the psychological impacts of the ongoing violence in Israel and Gaza, and other research has linked exposure to violence on social media and in the news to a “cycle of harm to mental health.”

    Alexandra Hamlet, a clinical psychologist in New York City, told CNN people who are caught off guard by seeing certain upsetting content are more likely to feel worse than individuals who choose to engage with content that could be upsetting to them. That’s particularly true for children, she said.

    “They are less likely to have the emotional control to turn off content that they find triggering than the average adult, their insight and emotional intelligence capacity to make sense of what they are seeing is not fully formed, and their communication skills to express what they have seen and how to make sense of it is limited comparative to adults,” Hamlet said.

    If deleting an app isn’t an option, here are other ways to restrict or closely monitor a child’s social media use:

    Parents can start with the parental control features built into the mobile operating system on their child’s phone. iOS’ Screen Time tool and Android’s Google Family Link app help parents manage a child’s phone activity and can restrict access to certain apps. From there, various controls can be selected, such as restricting app access or flagging inappropriate content.

    Guardians can also set up guardrails directly within social media apps.

    TikTok: TikTok, for example, offers a Family Pairing feature that allows parents and guardians to link their own TikTok account to their child’s account, restrict the child’s ability to search for content, limit content that may not be appropriate for them, or filter out videos containing certain words or hashtags from their feeds. These features can also be enabled within the settings of the app, without needing to sync up a guardian’s account.

    Facebook, Instagram and Threads: Meta, which owns Facebook, Instagram and Threads, has an educational hub for parents with resources, tips and articles from experts on user safety, and a tool that allows guardians to see how much time their kids spend on Instagram and set time limits, which some experts advise should be considered during this time.

    YouTube: On YouTube, the Family Link tool allows parents to set up supervised accounts for their children, set screen time limits or block certain content. YouTube Kids also provides a safer space for kids, and parents who decide their kids are ready to see more content on YouTube can create a supervised account. In addition, autoplay is turned off by default for anyone under 18 and can be turned off at any time in Settings by all users.

    Hamlet said families should consider creating a family policy where family members agree to delete their apps for a certain period of time.

    “It could be helpful to frame the idea as an experiment, where everyone is encouraged to share how not having the apps has made them feel over the course of time,” she said. “It is possible that after a few days of taking a break from social media, users may report feeling less anxious and overwhelmed, which could result in a family vote of continuing to keep the apps deleted for a few more days before checking in again.”

    If there’s resistance, Hamlet said, parents should try to reduce the time their children spend on the apps right now and come up with an agreed-upon number of minutes of usage each day.

    “Parents could ideally include a contingency where in exchange for allowing the child to use their apps for a certain number of minutes, their child must agree to having a short check in to discuss whether there was any harmful content that the child had exposure to that day,” she said. “This exchange allows both parents to have a protected space to provide effective communication and support, and to model openness and care for their child.”

    TikTok: A TikTok spokesperson told CNN the platform, which uses technology and 40,000 safety professionals to moderate content, is taking the situation seriously and has increased dedicated resources to help prevent violent, hateful or misleading content on the platform.

    Meta: Meta similarly said it has set up a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to monitor and respond to the situation. “Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation,” Meta said in a statement. “We’ll continue this work as this conflict unfolds.”

    YouTube: Google-owned YouTube said it is age-restricting thousands of videos that do not violate its policies but are not appropriate for viewers under 18 (this may include bystander footage). The company told CNN it has “removed thousands of harmful videos” and said its teams “remain vigilant to take action quickly across YouTube, including videos, Shorts and livestreams.”


  • Threads user count falls to new lows, highlighting retention challenges | CNN Business

    Washington, DC (CNN) — 

    Threads, Meta’s Twitter rival, is struggling to retain users roughly a month after its highly publicized launch, according to fresh industry estimates showing that app engagement has fallen to new lows.

    The data from market research firms Similarweb and Sensor Tower highlight the challenges facing Meta as it seeks to exploit the opening created by the chaos surrounding Twitter’s management.

    Threads’ daily active user count is down 82% from launch as of July 31, according to Sensor Tower, with just eight million users accessing the app each day. That is the app’s lowest daily figure yet; daily active users peaked at roughly 44 million the day after its release, Sensor Tower said.

    People are also opening the app less frequently and spending less time there, Sensor Tower added.

    On its launch day, Threads users opened the app an average of 14 times and spent an average of 19 minutes scrolling through it, the company reported. By the end of the month, however, those figures had fallen sharply.

    As of August 1, average time spent on Threads had fallen to just 2.9 minutes a day, and users were opening the app only 2.6 times per day, said Abe Yousef, a senior insights analyst at Sensor Tower.

    Findings from Similarweb showed the same pattern of decline. Threads’ user count peaked at roughly 49 million on July 7, the day after launch, and fell steadily to just over 11 million by July 29, said David Carr, a senior insights manager at Similarweb.

    The steepest drop-off occurred in the two weeks immediately following Threads’ launch, but the new data show the decline has continued since then.

    According to Sensor Tower, Threads’ daily active user count is still falling at a rate of roughly 1% per day.

    Speaking on the company’s earnings call last month, Meta CEO Mark Zuckerberg said he was “quite optimistic” about the app.

    “We saw unprecedented growth out of the gate and more importantly we’re seeing more people coming back daily than I’d expected,” he said. “And now, we’re focused on retention and improving the basics. And then after that, we’ll focus on growing the community to the scale we think is possible.”

    Threads launched with only a handful of features, and the company later promised to add highly requested tools like a reverse-chronological content feed, a desktop version of the app and direct messages.

    On July 10, Zuckerberg announced that more than 100 million people had signed up for Threads, making it one of the fastest-growing apps in history. The company has reportedly looked into adding “retention-driving hooks” that can keep users engaged.


  • X appears to slow load times for links to several news outlets and rival platforms | CNN Business

    New York (CNN) — 

    Loading times for links posted on X, the social media platform formerly known as Twitter, that pointed to some of its competitors and to news media sites appeared to be delayed or throttled for much of Tuesday.

    Links posted to X that directed to sites including the New York Times, Reuters, Facebook, Substack and X competitors Bluesky and Threads took around 5 seconds to load — a notable slowdown from the typically nearly instantaneous loading times, according to observations by CNN reporters. Many other sites, such as NBA.com, CNN and retailer Target, did not appear to be affected by the issue.

    The delays were first reported by users of the technology forum Hacker News.

    The reason for the delays in loading links to some sites was not clear. X did not respond to multiple requests for comment from CNN. The site has been plagued by technical issues since Musk bought it last year and laid off the majority of the staff. The issue appeared to have been resolved for some users by Tuesday afternoon.

    However, the delays affected the sites for rival platforms, as well as news outlets that Twitter owner Elon Musk has previously criticized. Musk earlier this year feuded with the New York Times over its unwillingness to pay for his platform’s new paid verification program, and he has separately called for the outlet to be “cancelled.”

    The apparent delay in visiting links to the New York Times was easy to verify with simple commands on a computer. Will Dormann, a cybersecurity researcher, used a basic command-line program on his Mac to compare the loading time for the New York Times website with that of a dummy website. The load time for the New York Times site was about 4.5 seconds longer, Dormann told CNN Tuesday.

    X, like other platforms, uses a link-shortener service to collect information on users who click on links shared on the platform. When a link for a New York Times article plugged into X’s link-shortener takes far longer to load than other websites using the same link-shortening service, “this is the clear indicator that there are server-side [at the X-operated shortener] shenanigans going on,” Dormann told CNN.
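    Dormann’s timing comparison is straightforward to reproduce. The sketch below is a minimal version in Python using only the standard library; the t.co URLs are hypothetical placeholders, so real shortened links copied from posts on X would need to be substituted before running it:

        # Minimal sketch of the timing comparison described above (standard library only).
        # The t.co URLs below are hypothetical placeholders, not real shortened links.
        import time
        import urllib.request

        def fetch_seconds(url: str) -> float:
            """Return wall-clock seconds to resolve a URL, follow redirects and download the body."""
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=30) as resp:
                resp.read()
            return time.monotonic() - start

        links = {
            "nytimes (reportedly throttled)": "https://t.co/EXAMPLE1",  # hypothetical
            "control (unaffected site)": "https://t.co/EXAMPLE2",       # hypothetical
        }
        for label, url in links.items():
            print(f"{label}: {fetch_seconds(url):.2f}s")

    A consistently larger time for one destination than another through the same shortener is the kind of server-side discrepancy Dormann described.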

    The New York Times said in a statement to CNN that it had observed the delay, but, “We have not received any explanation from the platform about this move.”

    “While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” it said in the statement. “The mission of The New York Times is to report the news impartially without fear or favor, and we’ll continue to do so, undeterred by any attempts to hinder this.”

    Meta, the parent company of Facebook and Threads, did not respond to a request for comment on the delay. But CEO Mark Zuckerberg responded to a post about the issue on Threads with a thinking face emoji.

    Musk and Zuckerberg have in recent weeks been making plans to take one another on in a cage fight, although Zuckerberg this week signaled that the fight may be off because he believes Musk “isn’t serious.” “Elon won’t confirm a date, then says he needs surgery, and now asks to do a practice round in my backyard instead,” Zuckerberg wrote on Threads Sunday. Musk on Monday appeared to respond by suggesting in a series of tweets that he might show up at Zuckerberg’s home to fight anyway.

    Substack cofounders Chris Best, Hamish McKenzie and Jairaj Sethi said in a statement to CNN that they hoped X would reverse the delay but that “Substack was created in direct response to this kind of behavior by social media companies.”

    “Writers cannot build sustainable businesses if their connection to their audience depends on unreliable platforms that have proven they are willing to make changes that are hostile to the people who use them,” the Substack cofounders said.

    Reuters said in a statement that it was aware of reports “of a delay in opening links to Reuters stories on X. We are looking into the matter.”

    Bluesky did not immediately respond to a request for comment about the link delay.

    X briefly sparked backlash in December over a decision to ban links to rival social media services, including Facebook, Instagram and Twitter alternatives like Mastodon, which was later reversed. The platform has also faced a series of outages and technical issues in recent months that have affected users’ ability to read tweets, view photos and click through links after Musk slashed the company’s staff and cut back on infrastructure spending.

    CNN’s Jon Passantino and Oliver Darcy contributed to this report.


  • The Israel-Hamas war reveals how social media sells you the illusion of reality | CNN Business

    New York (CNN) — 

    As the Israel-Hamas war reaches the end of its first week, millions have turned to platforms including TikTok and Instagram in hopes of comprehending the brutal conflict in real time. Trending search terms on TikTok in recent days illustrate the hunger for frontline perspectives: From “graphic Israel footage” to “live stream in Israel right now,” internet users are seeking out raw, unfiltered accounts of a crisis they are desperate to understand.

    For the most part, they are succeeding, discovering videos of tearful Israeli children wrestling with the permanence of death alongside images of dazed Gazans sitting in the rubble of their former homes. But that same demand for an intimate view of the war has created ample openings for disinformation peddlers, conspiracy theorists and propaganda artists — malign influences that regulators and researchers now warn pose a dangerous threat to public debates about the war.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attacks, including false claims that they were orchestrated by the media. Another, viewed more than 100,000 times, shows a clip from the video game “Arma 3” with the caption, “The war of Israel.” (Some users in the comments of that video noted they had seen the footage circulating before — when Russia invaded Ukraine.)

    TikTok is hardly alone. One post on X, formerly Twitter, was viewed more than 20,000 times and flagged as misleading by London-based social media watchdog Reset for purporting to show Israelis staging civilian deaths for cameras. Another X post the group flagged, viewed 55,000 times, was an antisemitic meme featuring Pepe the Frog, a cartoon that has been appropriated by far-right white supremacists. On Instagram, a widely shared video of parachutists dropping in on a crowd, captioned “imagine attending a music festival when Hamas parachutes in,” was debunked over the weekend; it in fact showed unrelated parachute jumpers in Egypt. (Instagram later labeled the video as false.)

    This week, European Union officials sent warnings to TikTok, Facebook and Instagram-parent Meta, YouTube and X, highlighting reports of misleading or illegal content about the war on their platforms and reminding the social media companies they could face billions of dollars in fines if an investigation later determines they violated EU content moderation laws. US and UK lawmakers have also called on those platforms to ensure they are enforcing their rules against hateful and illegal content.

    Since the violence in Israel began, Imran Ahmed, founder and CEO of the social media watchdog group Center for Countering Digital Hate, told CNN his group has tracked a spike in efforts to pollute the information ecosystem surrounding the conflict.

    “Getting information from social media is likely to lead to you being severely disinformed,” said Ahmed.

    Everyone from US foreign adversaries to domestic extremists to internet trolls and “engagement farmers” has been exploiting the war on social media for their own personal or political gain, he added.

    “Bad actors surrounding us have been manipulating, confusing and trying to create deception on social media platforms,” Dan Brahmy, CEO of the Israeli social media threat intelligence firm Cyabra, said Thursday in a video posted to LinkedIn. “If you are not sure of the trustworthiness [of content] … do not share,” he said.

    ‘Upticks in Islamophobic and antisemitic narratives’

    Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council in Washington, DC, told CNN his team has witnessed a similar phenomenon. The trend includes a wave of first-party terrorist propaganda, content depicting graphic violence, misleading and outright false claims, and hate speech – particularly “upticks in specific and general Islamophobic and antisemitic narratives.”

    Much of the most extreme content, he said, has been circulating on Telegram, the messaging app with few content moderation controls and a format that facilitates quick and efficient distribution of propaganda or graphic material to a large, dedicated audience. But in much the same way that TikTok videos are frequently copied and rebroadcast on other platforms, content shared on Telegram and other more fringe sites can easily find a pipeline onto mainstream social media or draw in curious users from major sites. (Telegram didn’t respond to a request for comment.)

    Schools in Israel, the United Kingdom and the United States this week urged parents to delete their children’s social media apps over concerns that Hamas will broadcast or disseminate disturbing videos of hostages who have been seized in recent days. Photos of dead or bloodied bodies, including those of children, have already spread across Facebook, Instagram, TikTok and X this week.

    And tech watchdog group Campaign for Accountability on Thursday released a report identifying several accounts on X sharing apparent propaganda videos with Hamas iconography or linking to official Hamas websites. Earlier in the week, X faced criticism for videos unrelated to the war being presented as on-the-ground footage and for a post from owner Elon Musk directing users to follow accounts that previously shared misinformation (Musk’s post was later deleted, and the videos were labeled using X’s “community notes” feature.)

    Some platforms are in a better position to combat these threats than others. Widespread layoffs across the tech industry, including at some social media companies’ ethics and safety teams, risk leaving the platforms less prepared at a critical moment, misinformation experts say. Much of the content related to the war is also spreading in Arabic and Hebrew, testing the platforms’ capacity to moderate non-English content, where enforcement has historically been less robust than in English-language content.

    “Of course, platforms have improved over the years. Communication & info sharing mechanisms exist that did not in years past. But they have also never been tested like this,” Brian Fishman, the co-founder of trust and safety platform Cinder who formerly led Facebook’s counterterrorism efforts, said Wednesday in a post on Threads. “Platforms that kept strong teams in place will be pushed to the limit; platforms that did not will be pushed past it.”

    Linda Yaccarino, the CEO of X, said in a letter Wednesday to the European Commission that the platform has “identified and removed hundreds of Hamas-related accounts” and is working with several third-party groups to prevent terrorist content from spreading. “We’ve diligently taken proactive actions to remove content that violates our policies, including: violent speech, manipulated media and graphic media,” she said. The European Commission on Thursday formally opened an investigation into X following its earlier warning about disinformation and illegal content linked to the war.

    Meta spokesperson Andy Stone said that since Hamas’ initial attacks, the company has established “a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation. Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.”

    YouTube, for its part, says its teams have removed thousands of videos since the attack began and continue to monitor for hate speech, extremism, graphic imagery and other content that violates its policies. In searches related to the war, the platform is also surfacing almost exclusively videos from mainstream news organizations.

    Snapchat told CNN that its misinformation team is closely watching content coming out of the region, making sure it is within the platform’s community guidelines, which prohibit misinformation, hate speech, terrorism, graphic violence and extremism.

    TikTok did not respond to a request for comment on this story.

    Large tech platforms are now subject to content-related regulation under a new EU law called the Digital Services Act, which requires them to prevent the spread of mis- and disinformation, address rabbit holes of algorithmically recommended content and avoid possible harms to user mental health. But in such a contentious moment, platforms that take too heavy a hand in moderation could risk backlash and accusations of bias from users.

    Platforms’ algorithms and business models — which generally rely on the promotion of content most likely to garner significant engagement — can aid bad actors who design content to capitalize on that structure, Ahmed said. Other product choices, such as X’s moves to allow any user to pay for a subscription for a blue “verification” checkmark that grants an algorithmic boost to post visibility, and to remove the headlines from links to news articles, can further manipulate how users perceive a news event.

    “It’s time to break the emergency glass,” Ahmed said, calling on platforms to “switch off the engagement-driven algorithms.” He added: “Disinformation factories are going to cause geopolitical instability and put Jews and Muslims at harm in the coming weeks.”

    Even as social media companies work to hide the absolute worst content from their users — whether out of a commitment to regulation, advertisers’ brand safety concerns, or their own editorial judgments — users’ continued appetite for gritty, close-up dispatches from Israelis and Palestinians on the ground is forcing platforms to walk a fine line.

    “Platforms are caught in this demand dynamic where users want the latest and the most granular, or the most ‘real’ content or information about events, including terrorist attacks,” Brookie said.

    The dynamic simultaneously highlights the business models of social media and the role the companies play in carefully calibrating their users’ experiences. The very algorithms that are widely criticized elsewhere for serving up the most outrageous, polarizing and inflammatory content are now the same ones that, in this situation, appear to be giving users exactly what they want.

    But closeness to a situation is not the same thing as authenticity or objectivity, Ahmed and Brookie said, and the wave of misinformation flooding social media right now underscores the dangers of conflating them.

    Despite giving the impression of reality and truthfulness, Brookie said, individual stories and combat footage conveyed through social media often lack the broader perspective and context that journalists, research organizations and even social media moderation teams apply to a situation to help achieve a fuller understanding of it.

    “It’s my opinion that users can interact with the world as it is — and understand the latest, most accurate information from any given event — without having to wade through, on an individual basis, all of the worst possible content about that event,” Brookie said.

    Potentially exacerbating the messy information ecosystem is a culture on social media platforms that often encourages users to bear witness to and share information about the crisis as a way of signaling their personal stance, whether or not they are deeply informed. That can lead even well-intentioned users to unwittingly share misleading information or highly emotional content created with the intention of collecting views or monetizing highly engaging content.

    “Be very cautious about sharing in the middle of a major world event,” Ahmed said. “There are people trying to get you to share bullsh*t, lies, which are designed to inculcate you to hate or to misinform you. And so sharing stuff that you’re not sure about is not helping people, it’s actually really harming them and it contributes to an overall sense that no one can trust what they’re seeing.”


  • Meta criticized for making reproductive health an R-rated issue | CNN Business

    (CNN) — 

    Female reproductive health experts are calling on Meta, the parent company of Facebook and Instagram, to rethink its restrictions on reproductive health content.

    The company has long faced criticism for removing and restricting female reproductive health information; a prominent report from the Center for Intimacy Justice early last year accused Meta of systematically rejecting many female and gender-diverse reproductive health ads. The CIJ report also accused Meta of having biased algorithms, noting that male reproductive health ads were permitted, including ads that referenced male sexual pleasure.

    In a bid to address those concerns, Meta tweaked its “adult products or services” advertising policy last October to include clearer guidelines about reproductive health, clarifying that it allows the promotion of “reproductive health products or services” if the content is targeted to “people aged 18 or older.”

    Meta argues the topic is sensitive, stating that as a global company it needs to take into account the “wide array of people from different cultures and countries” to “avoid potential negative experiences.”

    However, female reproductive health experts tell CNN that the advertising policy is still too restrictive and is creating barriers to how younger people around the world access information about female reproductive health issues, including the menstrual cycle, which can start as early as age 8.

    They argue that censoring content about normal and natural bodily functions plays into the shame that has long plagued how people learn about the female body and hormone cycle. That can hinder how people with uteruses advocate for their bodies in healthcare settings, including obtaining care for misunderstood and underdiagnosed conditions like endometriosis.

    The practice of censoring female reproductive health content is not unique to Meta, with similar issues reported on other social media platforms. However, Meta is under specific scrutiny for failing to adequately address the issue within its policy updates last year.

    The founder and CEO of the Center for Intimacy Justice, Jackie Rotman, told CNN that despite the policy update, Meta’s algorithms still seem to have a problem with female reproductive health content.

    “The policy says that reproductive health is allowed, but in practice their technology is still rejecting it,” Rotman said, explaining that images of uteruses are often mistakenly flagged as nudity, and that words like period, menopause, endometriosis and vagina commonly trigger sexually inappropriate warnings.

    Rotman noted that while Meta’s reproductive health guidelines target advertising content, unpaid posts are also often affected by Meta’s algorithms. She says shadow-banning, which refers to content being partially hidden from certain audiences, is common practice for organic content. Several reproductive health content creators told CNN that they experience shadow-banning, explaining that determining what is considered too taboo is a time-consuming game of trial and error.

    Dr. Hazel Wallace, author of “The Female Factor,” told CNN she wishes she could be more direct in how she speaks about the female body and hormone cycle, including menstrual health. However, she said she has learned that “to educate people, you almost have to play the game.”

    She says she often experiences shadow-banning, with her analytics showing less engagement when she uses words like period. She explained that her team experimented with Meta’s algorithm, finding they could often dodge restrictions by misspelling the word period as p3riod.

    “We found that it increased engagement because it doesn’t flag your content as being inappropriate to certain audiences,” Wallace outlined.
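    What Wallace’s team observed is consistent with a filter that matches flagged keywords verbatim. The toy sketch below (in Python, and in no way a description of Meta’s actual systems) shows why a digit-for-letter swap like “p3riod” slips through such a filter unless the text is normalized first:

        # Toy keyword filter, not Meta's actual moderation system: shows why a
        # verbatim match misses "p3riod" and how simple normalization catches it.
        FLAGGED = {"period", "menopause", "endometriosis", "vagina"}
        LEET_MAP = str.maketrans({"3": "e", "0": "o", "1": "i", "4": "a"})

        def naive_filter(text: str) -> bool:
            # Flag only if a flagged word appears exactly as written.
            return any(word in FLAGGED for word in text.lower().split())

        def normalized_filter(text: str) -> bool:
            # Undo common digit-for-letter swaps before checking.
            return naive_filter(text.translate(LEET_MAP))

        print(naive_filter("tips for your p3riod"))       # False: evades the filter
        print(normalized_filter("tips for your p3riod"))  # True: caught after normalization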

    While Meta has on several occasions apologized and reinstated female reproductive health content that it says was mistakenly removed, it still stipulates an age restriction in its policy. Therefore, even if the updated policy were perfectly implemented, Meta would still be greenlighting the practice of censoring crucial content from certain audiences.

    CNN asked Meta about the reports that it is continuing to remove, restrict, and shadow-ban female reproductive health content. CNN also asked Meta why all female reproductive health, including menstrual health, is classified as an 18+ issue.

    In response, a spokesperson for Meta, Ryan Daniels, said, “We welcome ads for women’s health and sexual wellness products, but we prohibit nudity and have specific rules about how these products can be marketed on our platform.”

    In a bid to change the conversation, female reproductive health content creators are not letting Meta’s restrictions silence their voices.

    Wallace, like so many others in her field, says she should not need to self-censor how she speaks about female reproductive health, arguing that censorship perpetuates a “hush hush” narrative about “normal experiences.”

    “Imagine a world where we are teaching young girls and women from puberty – this is what to expect, this is normal, this is not normal, this is when to ask for help. We would feel a lot more empowered,” Wallace stated.

    Categorizing reproductive health as an R-rated topic is an issue that extends far beyond Meta advertising policies, reflecting wider societal views, from politics to sex education curriculums.

    Tracey Lindeman, the author of “BLEED: Destroying Myths and Misogyny in Endometriosis,” says classifying all female reproductive health issues under the umbrella of sexual health “perpetuates the idea that our sexual organs are to be exploited and used for sexuality, even at a young age.”

    “You’re born with a reproductive system. Whether or not you’re having sex, you still have that system in your body, and it’s still affecting your body in different ways,” Lindeman reasoned.

    “How about we just teach people about how their bodies work first, before we start teaching them how they work to have sex,” Lindeman stated.


  • Meta’s Threads is temporarily blocking searches about Covid-19 | CNN Business

    (CNN) — 

    Threads, the much-hyped social media app from Facebook-parent Meta, is taking heat for blocking searches for “coronavirus,” “Covid,” and other pandemic-related queries.

    The tech giant’s decision to block coronavirus-related searches on its service comes as the United States deals with a recent uptick in Covid-19 hospitalizations, per CDC data, and more than three years into the global pandemic.

    News of Threads blocking searches related to the coronavirus was first reported by The Washington Post.

    A Meta spokesperson told CNN that the company just began rolling out keyword search for Threads to additional countries last week.

    “The search functionality temporarily doesn’t provide results for keywords that may show potentially sensitive content,” the statement added. “People will be able to search for keywords such as ‘COVID’ in future updates once we are confident in the quality of the results.” 

    As of Monday, searches on the Threads app conducted by CNN for “coronavirus,” “Covid” and “Covid-19” yielded a blank page with the text: “No results.” Searches for “vaccine” also prompted no results. Typing any of these queries into the Threads app does, however, offer a link directing users to the CDC’s website on Covid-19 or vaccinations, depending on the search.

    Meta did not disclose what other keyword searches currently yield no results.

    Meta’s Facebook and other social media platforms faced controversy in the early part of the pandemic for the apparent spread of Covid-19-related misinformation online.

    Meta officially launched Threads in early July, and the app quickly garnered more than 100 million sign-ups in its first week on the heels of months of chaos at Twitter, which is now known as X. But much of the buzz faded in the weeks that followed as users realized the bare-bones platform still lacked many of the features that made X popular with users.

    Threads released its much-requested web version late last month, and its keyword search about a week ago. But the current limitations around its search function highlight how the platform still has some kinks to work through before it can fully replace the real-time search and engagement experience that social media users have historically relied on X to provide.

    CNN’s Clare Duffy contributed to this report.


  • Parents urged to delete their kids’ social media accounts ahead of possible Israeli hostage videos | CNN Business

    New York (CNN) — 

    Schools in Israel, the UK and the US are advising parents to delete their children’s social media apps over concerns that Hamas militants will broadcast or disseminate disturbing videos of hostages who have been seized in recent days.

    A Tel Aviv school’s parents’ association said it expects videos of hostages “begging for their lives” to surface on social media. In a message to parents, shared with CNN by a mother of children at a high school in Tel Aviv, the association asked parents to remove apps such as TikTok from their children’s phones.

    “We cannot allow our kids to watch this stuff. It is also difficult, furthermore – impossible – to contain all this content on social media,” according to the parents’ association. “Thank you for your understanding and cooperation.”

    Hamas has warned that it will post murders of hostages on social media if Israel targets people in Gaza without warning.

    There are additional concerns that terrorists will exploit social media algorithms to specifically target such videos to followers of Jewish or Israeli influencers in an effort to wage psychological warfare on Israelis and Jews and their supporters globally.

    During the onslaught on Saturday, armed Hamas militants poured across the heavily fortified border into Israel and took as many as 150 hostages, including Israeli army officers, back to Gaza. The surprise attacks killed at least 1,200 people, according to the Israel Defense Forces, and injured thousands more.

    Since Israel began airstrikes on the Palestinian enclave Saturday, at least 1,055 people have been killed in Gaza, including hundreds of children, women, and entire families, according to the Palestinian health ministry. It said a further 5,184 have been injured, as of Wednesday.

    As the war rages on, some Jewish schools in the US are also asking parents not to share related videos or photos that may surface, and to prevent children – and themselves – from watching them. The schools are also advising community members to delete their social media apps during this time.

    “Together with other Jewish day schools, we are warning parents to disable social media apps such as Instagram, X, and Tiktok from their children’s phones,” the head of a school in New Jersey wrote in an email. “Graphic and often misleading information is flowing freely, augmenting the fears of our students. … Parents should discuss the dangers of these platforms and ask their children on a daily basis about what they are seeing, even if they have deleted the most unfiltered apps from their phones.”

    Another school in the UK said it asked students to delete their social media apps during a safety assembly.

    TikTok, Instagram and X – formerly known as Twitter – did not immediately respond to requests for comment on how they are combating the increase of videos being posted online and for comment on schools asking parents to delete these apps.

    But X said on its platform that it has experienced an increase in daily active users in the conflict area and that its escalation teams have “actioned tens of thousands of posts for sharing graphic media, violent speech, and hateful conduct.” It did not respond to a request to comment further or to define “actioned.”

    “We’re also continuing to proactively monitor for antisemitic speech as part of all our efforts,” X’s safety team said. “Plus we’ve taken action to remove several hundred accounts attempting to manipulate trending topics.”

    The company added it remains “laser focused” on enforcing the site’s rules and reminded users they can limit sensitive media they may encounter by visiting the “Content you see” option in Settings.

    Still, misinformation continues to run rampant on social media platforms, including X.

    A post viewed more than 500,000 times – featuring the hashtag #PalestineUnderAttack – claimed to show an airplane being shot down. But the clip was from the video game Arma 3, as was later noted in a “community note” appended to the post.

    Another video purported to show Israeli generals captured by Hamas fighters and was viewed more than 1.7 million times by Monday. The video, however, actually shows the detention of separatists in Azerbaijan.

    On Tuesday, the European Union warned Elon Musk of “penalties” for disinformation circulating on X amid the Israel-Hamas war.

    The EU also informed Meta CEO Mark Zuckerberg on Wednesday of a disinformation surge on its platforms – which include Facebook – and demanded the company respond within 24 hours with its plan to combat the issue.

    In an Instagram story on Tuesday, Zuckerberg called the attack “pure evil” and said his focus “remains on the safety of our employees and their families in Israel and the region.”


  • Justin Trudeau blasts Facebook for blocking news as Canada’s wildfires rage | CNN Business

    (CNN) — 

    Canadian Prime Minister Justin Trudeau blasted Facebook for “putting corporate profits ahead of people’s safety” as the social media platform continues to block news content while wildfires rage in Canada’s Northwest Territories and British Columbia.

    “It is so inconceivable that a company like Facebook is choosing to put corporate profits ahead of ensuring that local news organizations can get up-to-date information to Canadians, and reach them where Canadians spend a lot of their time; online, on social media, on Facebook,” Trudeau said during a news conference Monday.

    Some 60,000 people across the Northwest Territories and British Columbia have been placed under evacuation orders since this weekend, according to the most recent numbers from Canadian officials. Also on Monday, Trudeau described the devastation wrought by the wildfires as “apocalyptic” and praised Canadians for stepping up to support evacuees.

    Earlier this month, Facebook’s parent company Meta began to block news links on Facebook and Instagram in Canada, in response to recently passed legislation in the country that requires tech companies to negotiate payments to news organizations for hosting their content.

    A Meta spokesperson told CNN in a statement on Monday that Canadians “continue to use our technologies in large numbers to connect with their communities and access reputable information, including content from official government agencies, emergency services and non-governmental organizations.”

    The new legislation in Canada “forces us to end access to news content in order to comply with the legislation but we remain focused on making our technologies available,” the statement added, pointing to Meta’s Safety Check tool, which the company said more than 45,000 people had used as of Friday to mark themselves as safe.

    The Meta spokesperson added that 300,000 people have visited the Yellowknife and Kelowna Crisis Response pages on Facebook.

    The Canadian legislation, known as Bill C-18 or the Online News Act, was given final approval in June. It aims to support the sustainability of news organizations by regulating “digital news intermediaries with a view to enhancing fairness in the Canadian digital news marketplace.”

    Meta has previously stated, via a company blog post, that the legislation “misrepresents the value news outlets receive when choosing to use our platforms.” The ongoing controversy in Canada comes amid a global debate over the relationship between news organizations and social media companies about the value of news content, and who gets to benefit from it.

    During his remarks Monday, Trudeau said Facebook’s move to block news content is “bad for democracy” in the long run. “But right now, in an emergency situation, where up-to-date local information is more important than ever, Facebook’s putting corporate profits ahead of people’s safety,” Trudeau said.

    CNN’s Brian Fung contributed to this report.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business

    Washington (CNN) — 

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. It was the first of nine sessions intended to develop consensus as the Senate prepares to draft legislation regulating the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” said Schumer, who organized the first of nine sessions. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Kennedy Caucus Room in the Russell Senate Office Building. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept past a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time the two men have shared a room since they began challenging each other to a cage fight months ago.

    (Photo caption: Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer (D-NY) convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington on Wednesday, Sept. 13, 2023.)

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Meta’s Threads is finally available on desktop | CNN Business


    New York
    CNN
     — 

    Threads users, rejoice: the app is rolling out its highly anticipated web version Tuesday.

    The update — perhaps the most requested by users since Threads’ mobile-only launch last month — puts the new platform one step closer to recreating the functions offered by rival X, the platform formerly known as Twitter, and could help reignite user growth following a sluggish period.

    Parent company Meta says Threads users will soon be able to log in, post, view and interact with other posts via a browser on a desktop computer, as the web version rolls out to users in the coming days. The company says it plans to add more desktop features in the future. In an early access test of some of the web-based features, CNN was able to post on the platform but could not yet scroll the home feed.

    Threads launched in early July with stunning success, garnering more than 100 million sign-ups in its first week on the back of months of chaos at Twitter. But the buzz faded somewhat as users realized the bare-bones platform still lacked many of the features that made Twitter popular, such as trending topics, robust search functions and direct messaging. Threads has been steadily rolling out smaller updates, but the hotly demanded web version could help spur stronger user engagement.

    The new web version could also raise fresh competitive concerns for X, after owner Elon Musk sparked user backlash last week by suggesting he might do away with the platform’s block feature.

    Meta employees have for weeks teased that a desktop version of Threads was in the works and being tested internally. Just last week, Instagram head Adam Mosseri, who is also leading Threads, said he had been posting from the platform’s desktop version and suggested “it’ll be ready soon but it needs more work.”

    Web access is just one of a series of recent updates to Threads as Meta continues to build out the new platform. Other features added over the past month include new “reposts” and “likes” tabs that show users the posts they have reshared and liked in their profiles, a chronological following feed and a button to share Threads posts to Instagram DMs.

    Continued updates to Threads are essential if Meta wants to maintain the early traction it had with users. Despite the app’s stunning success following its launch, by the end of July, Threads’ daily active user count had fallen 82% to around 8 million users, according to a report from market research firm Sensor Tower earlier this month. By August 16, updates to Threads had helped the app notch slight gains to 11 million daily active users, Sensor Tower said in a report Monday.

    Meta CEO Mark Zuckerberg has said he is “quite optimistic” about the app’s potential.

    “We saw unprecedented growth out of the gate and more importantly we’re seeing more people coming back daily than I’d expected,” he said last month during the company’s earnings call. “And now, we’re focused on retention and improving the basics. And then after that, we’ll focus on growing the community to the scale we think is possible.”


  • Zuckerberg unveils Quest 3 as Meta tries to stay ahead in the mixed reality headset game | CNN Business


    New York
    CNN
     — 

    Meta is moving forward in its efforts to dominate the AR world with the new and improved Meta Quest 3.

    Unveiled by CEO Mark Zuckerberg at the company’s virtual Meta Connect event Wednesday, the headset starts at $500 and is a complete redesign of earlier models. The Quest 3, first announced in June, offers improved performance, immersive new mixed-reality features and a sleeker, more comfortable design.

    With a much stronger processor, higher-resolution display, revamped Touch Plus controllers and a 40% slimmer physique, the Quest 3 is a big step up from its predecessors. The Meta Quest 2 allows for strictly virtual reality, while the Meta Quest Pro has advanced passthrough cameras for seeing your actual surroundings, but it costs a whopping $1,000.

    Most importantly, the Quest 3 has support for Meta Reality, allowing users to enjoy mixed-reality experiences that blend the real world with the virtual one — for example, you can play a virtual piano on your real-life coffee table.

    “If you pick up a digital ball and throw it at the physical wall, it’ll bounce off it,” Zuckerberg said at Meta Connect Wednesday. “If someone’s shooting at you and you want to duck the fire, you just get behind your physical couch.”

    The Meta Quest virtual library is fully accessible with the Quest 3 – a library that now features VR-friendly Roblox, released Wednesday, and is set to add Xbox cloud gaming in December, giving gamers the chance to play titles like Halo and Minecraft on a large screen anywhere.

    The headset is available for preorder now and officially hits stores on Oct. 10 in two storage options (128GB and 512GB).

    Zuckerberg explains features of the new Quest 3 headset on September 27, 2023.

    Meta’s newest headset comes three years after the Quest 2, under a year after the Quest Pro and under four months after Apple unveiled the Vision Pro.

    Dubbed by Zuckerberg the “first mainstream mixed reality headset,” the Quest 3 is part of an ongoing arms race between two of tech’s biggest players to command the headset space, and central to Zuckerberg’s personal vision for a next-generation internet where users can interact with each other in virtual spaces resembling real life. It also comes in at a much lower price than the Apple alternative (which will cost you $3,499, to be exact), though it remains mainly a VR headset with mixed-reality options, while Apple’s product is a dedicated mixed-reality experience.

    To get ahead of Apple’s June unveiling of the Vision Pro, Zuckerberg teased the Meta Quest 3 just days before its rival’s big announcement. But the two companies had a tense relationship even before Apple’s entry into the market. They have competed over news and messaging features, and their CEOs have traded jabs over data privacy and app store policies. Last February, Meta said it expected to take a $10 billion hit in 2022 from Apple’s move to limit how apps like Facebook collect data for targeted ads.

    Meta has until now been the dominant player in the headset market, but it has so far struggled to attract a mainstream audience for its VR headset products. The Wall Street Journal reported last year that Meta had just 200,000 active users in Horizon Worlds, its app for socializing in VR. And in 2023, IDC estimates, just 10.1 million AR/VR headsets will ship globally across the entire market, far below the tens of millions of iPhones Apple sells each quarter.

    Morgan Stanley analysts called Apple’s Vision Pro a “moonshot” effort following its June announcement, saying the product “has the potential to become Apple’s next compute platform,” but that the company has “much to prove” before the headset’s launch next year.

    The biggest fight may not be between tech giants, but for the general public’s acceptance. Many analysts say the biggest hurdle to consumer adoption of mixed reality headsets is ensuring a wide range of use cases and experiences are available on the devices. While Meta has introduced features that let users play games, explore virtual worlds, watch YouTube videos, work out, chat with friends and more, it has yet to convince most consumers that the device is worthwhile.


  • EU asks Meta for more details on efforts to stop illegal and inaccurate content on Israel-Hamas war | CNN Business


    London
    CNN
     — 

    The European Union has told Meta it has a week to explain in greater detail how it is fighting the spread of illegal content and disinformation on its Facebook and Instagram platforms following the attacks across Israel by Hamas.

    The European Commission, the bloc’s executive arm, said it had sent the formal request for information to Meta (META) Thursday.

    The commission also asked TikTok for more information on the steps it had taken to prevent the spread of “terrorist and violent content and hate speech,” it said, but without referring to the Israel-Hamas war.

    Last week, EU Commissioner Thierry Breton wrote to several social media companies, including Meta and TikTok, giving them 24 hours to detail the measures they were taking to comply with EU rules on content moderation enshrined in the recently enacted Digital Services Act (DSA).

    On Friday, Meta said its teams had been working “around the clock” since the attacks by Hamas on October 7 to monitor its platforms and outlined some of its actions against misinformation and content that violates its policies and standards.

    And on Sunday, TikTok announced that it had, among other measures, launched a command center to coordinate the work of its “safety professionals” around the world and improve the software it uses to automatically detect and remove graphic and violent content.

    But the European Commission has made it clear it needs more information. In its Thursday announcement, the body gave both Meta and TikTok until October 25 to respond to its requests and warned that it had the power to impose financial penalties if it was not satisfied with their responses.

    Both companies also have until November 8 to detail how they intend to protect the “integrity of elections” on their platforms, the commission said.

    Both Meta and TikTok are bound by obligations set out in the DSA, a landmark piece of legislation, enacted in August, that seeks to more stringently regulate large tech companies, and protect people’s rights online.

    The commission’s formal requests come a week after it issued a similar ultimatum to X, the company formerly known as Twitter, asking for information on how it intends to stop the spread of illegal, misleading, violent and hateful content.

    The commission said it had opened an investigation into X’s compliance with the DSA. It has not announced parallel investigations into Meta or TikTok.


  • Large US tech companies face new EU rules | CNN Business



    CNN
     — 

    The world’s largest tech companies must comply with a sweeping new European law starting Friday that affects everything from social media moderation to targeted advertising and counterfeit goods in e-commerce — with possible ripple effects for the rest of the world.

    The unprecedented EU measures for online platforms will apply to companies including Amazon, Apple, Google, Meta, Microsoft, Snapchat and TikTok, among many others, reflecting one of the most comprehensive and ambitious efforts by policymakers anywhere to regulate tech giants through legislation. It could lead to fines for some companies and to changes in software affecting consumers.

    The rules seek to address some of the most serious concerns that critics of large tech platforms have raised in recent years, including the spread of misinformation and disinformation; possible harms to mental health, particularly for young people; rabbit holes of algorithmically recommended content and a lack of transparency; and the spread of illegal or fake products on virtual marketplaces.

    Although the European Union’s Digital Services Act (DSA) passed last year, companies have had until now to prepare for its enforcement. Friday marks the arrival of a key compliance deadline — after which tech platforms with more than 45 million EU users will have to meet the obligations laid out in the law.

    The EU also says the law intends “to establish a level playing field to foster innovation, growth and competitiveness both in the European Single Market and globally.” The action reinforces Europe’s position as a leader in checking the power of large US tech companies.

    For all platforms, not just the largest ones, the DSA bans data-driven targeted advertising aimed at children, as well as targeted ads to all internet users based on protected characteristics such as political affiliation, sexual orientation and ethnicity. The restrictions apply to all kinds of online ads, including commercial advertising, political advertising and issue advertising. (Some platforms had already in recent years rolled out restrictions on targeted advertising based on protected characteristics.)

    The law bans so-called “dark patterns,” or the use of subtle design cues that may be intended to nudge consumers toward giving up their personal data or making other decisions that a company might prefer. An example of a dark pattern commonly cited by consumer groups is when a company tries to persuade a user to opt into tracking by highlighting an acceptance button with bright colors, while simultaneously downplaying the option to opt out by minimizing that choice’s font size or placement.

    The law also requires all online platforms to offer ways for users to report illegal content and products and for them to appeal content moderation decisions. And it requires companies to spell out their terms of service in an accessible manner.

    For the largest platforms, the law goes further. Companies designated as Very Large Online Platforms or Very Large Online Search Engines will be required to undertake independent risk assessments focused on, for example, how bad actors might try to manipulate their platforms, or use them to interfere with elections or to violate human rights — and companies must act to mitigate those risks. And they will have to set up repositories of the ads they’ve run and allow the public to inspect them.

    Just a handful of companies are considered very large platforms under the law. But the list finalized in April includes the most powerful tech companies in the world, and, for those firms, violations can be expensive. The DSA permits EU officials to issue fines worth up to 6% of a very large platform’s global annual revenue. That could mean billions in fines for a company as large as Meta, which last year reported more than $116 billion in revenue.
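
    For a rough sense of that ceiling, here is an illustrative back-of-the-envelope calculation (a sketch using Meta’s reported 2022 revenue as an approximate input, not a Commission estimate):

        # Illustrative only: theoretical maximum DSA fine for a company with
        # roughly $116 billion in global annual revenue (Meta's 2022 figure).
        revenue = 116e9            # global annual revenue in USD, approximate
        max_fine = 0.06 * revenue  # DSA cap: up to 6% of global annual revenue
        print(f"${max_fine / 1e9:.1f} billion")  # -> $7.0 billion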

    Companies have spent months preparing for the deadline. As recently as this month, TikTok rolled out a tool for reporting illegal content and said it would give EU users specific explanations when their content is removed. It also said it would stop showing ads to teens in Europe based on the data the company has collected on them, all to comply with the DSA rules.

    “We’ve been supportive of the objectives of the DSA and the creation of a regulatory regime in Europe that minimizes harm,” said Nick Clegg, Meta’s president of global affairs and a former deputy prime minister of the UK, in a statement Tuesday. He said Meta assembled a 1,000-person team to prepare for DSA requirements. He outlined several efforts by the company including limits on what data advertisers can see on teens ages 13 to 17 who use Facebook and Instagram. He said advertisers can no longer target the teens based on their activity on those platforms. “Age and location is now the only information about teens that advertisers can use to show them ads,” he said.

    In a statement, a Microsoft spokesperson told CNN the DSA deadline “is an important milestone in the fight against illegal content online. We are mindful of our heightened responsibilities in the EU as a major technology company and continue to work with the European Commission on meeting the requirements of the DSA.”

    Snapchat parent Snap told CNN that it is working closely with the European Commission to ensure the company is compliant with the new law. Snap has appointed several dedicated compliance employees to monitor whether it is living up to its obligations, the company said, and has already implemented several safeguards.

    And Apple said in a statement that the DSA’s goals “align with Apple’s goals to protect consumers from illegal and harmful content. We are working to implement the requirements of the DSA with user privacy and security as our continued North Star.”

    Google and Pinterest told CNN they have also been working closely with the European Commission.

    “We share the DSA’s goals of making the internet even more safe, transparent and accountable, while making sure that European users, creators and businesses continue to enjoy the benefits of the web,” a Google spokesperson said.

    A Pinterest spokesperson said the company would “continue to engage with the European Commission on the implementation of the DSA to ensure a smooth transition into the new legal framework.” The spokesperson added: “The wellbeing, safety and privacy of our users is a priority and we will continue to build on our efforts.”

    Many companies should be able to comply with the law, given their existing policies, teams and monitoring tools, according to Robert Grosvenor, a London-based managing director at the consulting firm Alvarez & Marsal. “Europe’s largest online service providers are not starting from ground zero,” Grosvenor said. But, he added: “Whether they are ready to become a highly regulated sector is another matter.”

    EU officials have signaled they will be scrutinizing companies for violations. Earlier this summer, European officials performed preemptive “stress tests” of X, the company formerly known as Twitter, as well as Meta and TikTok to determine the companies’ readiness for the DSA.

    For much of the year, EU Commissioner Thierry Breton has been publicly reminding X of its coming obligations as the company has backslid on some of its content moderation practices. Even as Breton concluded that X was taking its stress test seriously in June, the company had just lost a top content moderation official and had withdrawn from a voluntary EU commitment on disinformation that European officials had said would be part of any evaluation of a platform’s compliance with the DSA.

    X told CNN ahead of Friday’s deadline that it was on track to comply with the new law.

    Analysts anticipate that the EU will be watching even more closely after the deadline — and some hope the rules will either encourage tech platforms to voluntarily replicate their EU practices around the world or drive policymakers in other jurisdictions to adopt similar measures.

    “We hope that these new laws will inspire other jurisdictions to act because these are, after all, global companies which apply many of the same practices worldwide,” said Agustin Reyna, head of legal and economic affairs at BEUC, a European consumer advocacy group. “Europe got the ball rolling, but we need other jurisdictions to win the match against tech giants.”

    Already, Amazon has sought to challenge the very large platform label in court, arguing that the DSA’s requirements are geared toward ad-based online speech platforms, that Amazon is a retail platform and that none of its direct rivals in Europe have likewise been labeled, despite being larger than Amazon within individual EU countries.

    The legal fights could present the first major test of the DSA’s durability in the face of Big Tech’s enormous resources. Amazon told CNN that it plans to comply with the EU General Court’s decision, either way.

    “Amazon shares the goal of the European Commission to create a safe, predictable and trusted online environment, and we invest significantly in protecting our store from bad actors, illegal content, and in creating a trustworthy shopping experience,” an Amazon spokesperson said. “We have built on this strong foundation for DSA compliance.”

    TikTok did not immediately respond to a request for comment on this story.


  • Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business


    Las Vegas, Nevada
    CNN
     — 

    Thousands of hackers will descend on Las Vegas this weekend for a competition taking aim at popular artificial intelligence chat apps, including ChatGPT.

    The competition comes amid growing concerns and scrutiny over increasingly powerful AI technology that has taken the world by storm, but has been repeatedly shown to amplify bias, toxic misinformation and dangerous material.

    Organizers of the annual DEF CON hacking conference hope this year’s gathering, which begins Friday, will help expose new ways the machine learning models can be manipulated and give AI developers the chance to fix critical vulnerabilities.

    The hackers are working with the support and encouragement of the technology companies behind the most advanced generative AI models, including OpenAI, Google, and Meta, and even have the backing of the White House. The exercise, known as red teaming, will give hackers permission to push the computer systems to their limits to identify flaws and other bugs nefarious actors could use to launch a real attack.

    The competition was designed around the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The guide, released last year by the Biden administration, was intended to spur companies to make and deploy artificial intelligence more responsibly and to limit AI-based surveillance, though there are few US laws compelling them to do so.

    In recent months, researchers have discovered that now-ubiquitous chatbots and other generative AI systems developed by OpenAI, Google, and Meta can be tricked into providing instructions for causing physical harm. Most of the popular chat apps have at least some protections in place designed to prevent the systems from spewing disinformation or hate speech, or from offering information that could lead to direct harm — for instance, providing step-by-step instructions for how to “destroy humanity.”

    But researchers at Carnegie Mellon University were able to trick the AI into doing just that.

    They found OpenAI’s ChatGPT offered tips on “inciting social unrest,” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause and Google’s Bard app suggested releasing a “deadly virus” but warned that in order for it to truly wipe out humanity it “would need to be resistant to treatment.”

    Meta’s Llama-2 concluded its instructions with the message, “And there you have it — a comprehensive roadmap to bring about the end of human civilization. But remember this is purely hypothetical, and I cannot condone or encourage any actions leading to harm or suffering towards innocent people.”

    The findings are a cause for concern, the researchers told CNN.

    “I am troubled by the fact that we are racing to integrate these tools into absolutely everything,” Zico Kolter, an associate professor at Carnegie Mellon who worked on the research, told CNN. “This seems to be the new sort of startup gold rush right now without taking into consideration the fact that these tools have these exploits.”

    Kolter said he and his colleagues were less worried that apps like ChatGPT could be tricked into providing information they shouldn’t, and more concerned about what these vulnerabilities mean for the wider use of AI, since so much future development will be based on the same systems that power these chatbots.

    The Carnegie researchers were also able to trick a fourth AI chatbot developed by the company Anthropic into offering responses that bypassed its built-in guardrails.

    Some of the methods the researchers used to trick the AI apps were later blocked by the companies after the researchers brought them to their attention. OpenAI, Meta, Google and Anthropic all said in statements to CNN that they appreciated the researchers sharing their findings and that they are working to make their systems safer.

    But what makes AI technology unique, said Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither the researchers, nor the companies who are developing the technology, fully understand how the AI works or why certain strings of code can trick the chatbots into circumventing built-in guardrails — and thus cannot properly stop these kinds of attacks.

    “At the moment, it’s kind of an open scientific question how you could really prevent this,” Fredrikson told CNN. “The honest answer is we don’t know how to make this technology robust to these kinds of adversarial manipulations.”

    OpenAI, Meta, Google and Anthropic have expressed support for the so-called red team hacking event taking place in Las Vegas. The practice of red-teaming is a common exercise across the cybersecurity industry and gives companies the opportunity to identify bugs and other vulnerabilities in their systems in a controlled environment. Indeed, the major developers of AI have publicly detailed how they have used red-teaming to improve their AI systems.

    “Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI,” an OpenAI spokesperson told CNN.

    Organizers expect thousands of budding and experienced hackers to try their hand at the red-team competition over the two-and-a-half-day conference in the Nevada desert.

    Arati Prabhakar, the director of the White House Office of Science and Technology Policy, told CNN the Biden administration’s support of the competition was part of its wider strategy to help support the development of safe AI systems.

    Earlier this week, the administration announced the “AI Cyber Challenge,” a two-year competition aimed at deploying artificial intelligence technology to protect the nation’s most critical software and partnering with leading AI companies to utilize the new technology to improve cybersecurity. 

    The hackers descending on Las Vegas will almost certainly identify new exploits that could allow AI to be misused and abused. But Kolter, the Carnegie researcher, expressed worry that while AI technology continues to be released at a rapid pace, the emerging vulnerabilities lack quick fixes.

    “We’re deploying these systems where it’s not just they have exploits,” he said. “They have exploits that we don’t know how to fix.”


  • Maui conspiracy theories are spreading on social media. Why this always happens after a disaster | CNN Business



    CNN
     — 

    A slew of viral conspiracy videos on social media have made baseless claims that the Maui wildfires were started intentionally as part of a land grab, highlighting how quickly misinformation spreads after a disaster.

    While the cause of the fires hasn’t been determined, Hawaiian Electric — the major power company on Maui — is under scrutiny for not shutting down power lines when high winds created dangerous fire conditions. (Hawaiian Electric previously said both the company and the state are conducting investigations into what happened.) Maui experienced high winds from Hurricane Dora, which passed to the south, while also grappling with a drought. Wildfires across the region have long been a concern.

    Still, conspiracy theories continue to circulate as nearly 400 people remain unaccounted for.

    It’s not uncommon for conspiracy theories to make the rounds after a national crisis. According to Renee DiResta, a research manager at Stanford University who studies misinformation, people often look for a way to make sense of the world when they are anxious or have a feeling of powerlessness.

    “Theories that attribute the cause of a crisis to a specific bad actor offer a villain to blame, someone to potentially hold responsible,” DiResta said. “The conspiracy theories that are the most effective and plausible are usually based on some grain of truth and connect to some existing set of beliefs about the world.”

    For example, someone who distrusts the government may be more inclined to believe someone who posts negatively about a government agency.

    Conspiracy theorists on varying platforms claim the fires, which killed at least 114 people earlier this month, were planned as part of a strategic effort to weed out less wealthy residents on Maui and make room for multi-million dollar developments.

    In one video, a user claims a friend sent him a video of a laser beam “coming out of the sky, directly targeting the city.” “This was a direct energy weapon assault,” he said. The video remains posted but now includes a label from Instagram listing it as “false information.” The imagery appears to be from a previous SpaceX launch in California.

    Related far-fetched theories say the alleged “laser beams” were programmed not to hit anything blue, explaining why so many blue beach umbrellas were left unscathed by the fires.

    Other social media users allege elite Maui residents were behind the fires so they could buy the destroyed land at a discounted price and potentially rebuild it as a “smart city.”

    “You’re telling me that these cheaper lower middle class houses burnt down directly across the street and all of the mansions are still standing?” one YouTube user posted, referencing aerial imagery taken of the destruction.

    One tweet about a celebrity purchasing hundreds of acres across Maui over the past few years has received more than 12 million views on X, the platform formerly known as Twitter.

    When a conspiracy theory gains traction online, others may chime in and offer explanations for details not discussed in the original post. Social media algorithms can amplify these theories based on user attention and interactions.
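
    As a toy illustration of that dynamic (emphatically not any platform’s actual ranking system), an engagement-weighted feed can be sketched in a few lines; all names, numbers and weights below are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class Post:
            text: str
            shares: int
            likes: int

        def engagement_score(post: Post) -> float:
            # Toy scoring rule: shares weighted more heavily than likes.
            return 2.0 * post.shares + post.likes

        feed = [
            Post("measured official update", shares=10, likes=50),
            Post("alarming conspiracy claim", shares=300, likes=400),
        ]

        # Highest-scoring posts surface first, regardless of accuracy.
        for post in sorted(feed, key=engagement_score, reverse=True):
            print(post.text)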

    “Social media is incredibly valuable in crisis events as people on the ground can report the facts directly, but that usefulness is tempered, and can be dangerous, if misleading claims proliferate particularly in the immediate aftermath,” DiResta said.

    Social media platforms like Instagram, TikTok and YouTube have taken steps to curb the spread of conspiracy theories and misinformation, but some videos can slip through the cracks. Many platforms use a mix of tech monitoring tools and human reviewers to enforce their community guidelines.

    Before this article was published, TikTok removed several conspiracy theory videos flagged by CNN for violating its community guidelines, which prohibit “inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent.” A company spokesperson said more than 40,000 trust and safety professionals around the world review and moderate content at all hours of the day.

    Meanwhile, in a statement provided to CNN, YouTube spokesperson Elena Hernandez said the platform uses different sections, such as top news, developing news and a fact-check panel, to provide users with as much context and background information as possible on certain trending topics, and will remove content when necessary.

    “During major news events, such as the horrific fires in Hawaii, our systems are designed to raise up content from authoritative sources in search results and recommendations,” Hernandez said.

    Instagram also employs third-party fact-checkers who contact sources, check public data and work to verify images and videos in questionable content. They then rate and label the content in question, such as “false,” “altered” or “missing context,” to encourage viewers to think critically about what they’re about to see.

    As a result, those posts show up far less often in users’ feeds and repeat offenders can face varying risks, such as losing monetization on their pages.

    Social media platform X did not immediately respond to a request for comment.

    Michael Inouye, a principal analyst at market research firm ABI Research, said social media companies are in a challenging spot because they want to uphold freedom of speech, but do so in an environment where posts that receive the most shares and likes often rise to the top of user feeds. That means posts sharing conspiracy theories that spark fear and emotion may perform better in a crisis than those sharing straightforward, accurate information.

    “Ultimately, social media will have to decide if it wants to be a better news organization or remain this ‘open’ platform for expression that can run counter to the ethics and standards that is required by news reporting,” Inouye said. “The problem is, even if something isn’t labeled as ‘news,’ some will still interpret personal opinion as truth, which puts us back in the same position.”


  • Amazon employees leak secret info that marketplace sellers can buy on Telegram

    Workers fulfill orders at an Amazon fulfillment center on Prime Day in Melville, New York, US, on Tuesday, July 11, 2023.

    Johnny Milano | Bloomberg | Getty Images

    For the millions of sellers who make up the booming Amazon marketplace, few things are as perpetually concerning as the threat of getting suspended for alleged wrongdoing and watching business evaporate overnight.

    Helping third-party sellers recover their accounts has turned into a large and lucrative enterprise, because the only way the merchants can get back up and running is to admit guilt and correct the issue or show sufficient evidence that they did nothing wrong. The process is often costly, lengthy and fraught with challenges.

    Enter the illicit broker.

    For a fee of $200 to $400, sellers can pay for services such as “Amazon Magic,” as one broker on encrypted messaging service Telegram calls it. The offerings also include access to company insiders who can remove negative reviews on a product and provide information on competitors. Users are told to send a private message to learn the price of certain services.

    The Telegram group has over 13,000 members, and it’s far from the only one. Other brokers peddle similar services on Telegram as well as on WeChat, WhatsApp and Facebook Groups. The confidential data is promoted as intelligence gold for any seller working to get their product or account reinstated.

    The groups are part of a robust market of so-called black hat service providers that have cropped up alongside the rise of third-party marketplaces on Amazon, Etsy and Walmart. Amazon’s marketplace now accounts for over 60% of goods sold on the platform, and includes numerous businesses that generate millions of dollars in annual revenue on the site.

    As it’s grown, the sprawling global marketplace has also seen a surge in the number of counterfeiters and spammers trying to game the system, which has pushed Amazon to ramp up enforcement. Much of the activity originates off Amazon’s marketplace and on social media and encrypted messaging apps, complicating the policing efforts.

    A public Facebook page identified by CNBC offers an internal screenshot service with “valuable insight into your seller account, allowing you to see how Amazon employees view your account and its performance.”

    Facebook parent Meta didn’t respond to a request for comment.

    The issue of rogue employees taking bribes is not a new one for Amazon. The company has in the past dealt with low-level, low-wage seller support staffers in China, India and Costa Rica who have accepted payments in exchange for leaking information.

    Brokers, who act as middlemen between sellers and employees, often reach out to insiders on LinkedIn, said a person familiar with the matter who asked not to be named due to confidentiality. Amazon has an internal group tasked with threat analysis and response, including a team dedicated to investigating employees suspected of leaking data, the source said. The threat analysis unit monitors social media platforms for abusive groups where bad actors may congregate before engaging in illicit activity on Amazon’s marketplace.

    Amazon told CNBC that it has systems in place to detect suspicious behavior such as improper access to confidential data and investigates these activities, sharing information with law enforcement agencies. It reports abusive groups to social media platforms and encrypted messaging services, where bad actors are increasingly concentrating their activities in order to avoid detection, the company said.

    “There is no place for fraud at Amazon and we will continue to pursue all measures to protect our store and hold bad actors accountable,” Christy Distefano, an Amazon spokesperson, said in an email.

    Amazon declined to say whether it has disciplined or fired employees for leaking data in exchange for payments, beyond noting that it has zero tolerance for staffers who violate its policies.

    Amazon’s ongoing bribery problem

    In 2018, Amazon investigated claims that employees, primarily based in China, received payments of $80 to more than $2,000 to share confidential sales information or delete bad reviews, The Wall Street Journal reported. More recently, the Department of Justice charged six individuals in 2020 with participating in a scheme to bribe employees and contractors for internal data.

    In July, the fifth defendant in the case, who is a well-known seller consultant, was sentenced to probation and house arrest after pleading guilty in March. Account annotations, internal notes from an Amazon staffer on a seller’s account, were among the confidential data being exchanged between the defendants and employees.

    Amazon said it uncovered the suspicious behavior related to the bribery case in 2018 and reported it to the FBI. The company said it had “robust systems” in place to detect suspicious behavior such as fraud and abuse. Amazon has also urged social media companies to assist it with rooting out fraudulent activity such as fake reviews.

    While Amazon is aware of the problem and is investing in people and technology to weed it out, groups continue to proliferate into the hundreds, the person with knowledge of the issue told CNBC. Accessing groups on encrypted chat apps such as Telegram, WeChat or WhatsApp may require a link or invitation.

    Remi Vaughn, a spokesperson for Telegram, told CNBC in an email that “moderators proactively monitor public parts of the platform and accept user reports in order to remove content that breaches our terms of service.”

    The Amazon Magic group on Telegram is public, with users advertising black hat services almost daily. Screenshots of Amazon’s internal Paragon system, which is used by seller support employees to handle cases, are distributed freely in the group. CNBC verified the legitimacy of the screenshots with sources familiar with the system.

    “Much more you can find about your account by ordering screenshots with inside information from us, as seller support sees it,” a message in the Telegram chat states.

    Many of the messages in the group are in Russian, and a user who runs the group claims on Facebook to be based in Ukraine. The person didn’t respond to a request for comment.

    Group administrators list a full menu of services available in an online spreadsheet. Annotations, which often include more detailed information than the suspension notifications, are priced at $180 apiece, and attacks on a competitor’s listing vary in pricing. Securing an upvote on a review, a tactic used to manipulate trustworthiness or popularity of a product, costs 50 cents. The brokers guarantee buyers they can deliver the goods within one to two business days.

    Amazon sellers have for years complained of being unfairly kicked off the site without explanation. The process of getting their account back can take months, costing critical sales in the meantime. The issue was a key focus of a 16-month investigation by the House Antitrust Subcommittee into competitive practices at Amazon and other Big Tech companies.

    “When Amazon turns off the faucet, everything goes to hell,” said Cynthia Stine, president of eGrowth Partners, a consultancy that helps merchants get reinstated. “I’ve had CEOs of large companies cry on the phone with me, and they’ve had to lay off their people. They’ve declared bankruptcy.”

    Account annotations are like an “insurance policy” for sellers who’ve been suspended, Stine said. She said she comes across potential clients who have purchased annotations and are seeking to regain selling privileges roughly once or twice a month. As black hat brokers and consultants have multiplied over the years, it’s eaten into her business, Stine said.

    “For a time, people wouldn’t even come to us, they would just go work with whoever they bought the data from,” she added.

    Amazon has previously said it has processes in place to help sellers avoid deactivation and get reinstated when appropriate. The company disputed claims that the chaotic and costly suspension process justifies illicit tactics such as buying confidential data.

    “There is no place for fraud at Amazon and no excuse for resorting to illegal activities,” an Amazon spokesperson told CNBC last month.


  • The ‘narrow breadth’ chorus has fallen silent. What broadening participation in stock-market rally means for investors.

    A wider swath of stocks has joined the S&P 500’s upswing after the so-called Magnificent Seven — Apple, Amazon, Alphabet, Microsoft, Meta, Nvidia and Tesla — single-handedly propelled the large-cap index into a bull market in early June, with the gauge now up more than 28% from its low notched last October and trading at its highest levels since April 2022, according to Dow Jones Market Data.

    Hopes that the U.S. economy could pull off a soft landing and avoid a recession despite the Federal Reserve’s aggressive interest-rate hikes, as well as receding inflation pressures and expectations for the end of the Fed’s monetary tightening campaign, have underpinned a notable expansion in market breadth over the past two months, according to Adam Turnquist, chief technical strategist at LPL Financial.

    The S&P 500 Equal Weighted Index, which lagged behind the market-cap-weighted S&P 500 for most of the year, has now kicked back into gear and staged an impressive comeback in July. The equal-weighted index and the S&P 500 each advanced 3.1% this month, according to FactSet data.

    The equal weighting eliminates the distortion of the megacap components and significantly changes several sector weightings in the S&P 500, including technology, which drops from around 29% on the SPX to only 13% on the equal-weighted index, said Turnquist in a Friday note. Meanwhile, the industrials sector has the biggest increase in weight, jumping from 9% on the SPX to 16% on the equal-weighted index.
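
    The mechanics are straightforward to sketch: in a cap-weighted index, a sector’s weight is the sum of its members’ market caps divided by the total, while in an equal-weighted index it is simply that sector’s share of the member count. A minimal Python sketch, assuming a hypothetical constituents table with “sector” and “market_cap” columns:

        import pandas as pd

        def sector_weights(constituents: pd.DataFrame) -> pd.DataFrame:
            """Compare cap-weighted vs. equal-weighted sector weights.

            `constituents` is assumed to hold one row per index member,
            with 'sector' and 'market_cap' columns (hypothetical layout).
            """
            caps = constituents.groupby("sector")["market_cap"].sum()
            cap_weighted = caps / caps.sum()
            # Equal weighting: each member counts 1/N, so a sector's weight
            # is just its share of the member count.
            equal_weighted = constituents["sector"].value_counts(normalize=True)
            return pd.DataFrame({"cap_weighted": cap_weighted,
                                 "equal_weighted": equal_weighted})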

    Another way to quantify and compare market breadth is to look at the percentage of stocks on an index trading above their longer-term 200-day moving average (dma), Turnquist said. In general, if a stock is trading above its 200 dma, it is considered to be in an uptrend, and if the price is below the 200 dma, it is considered in a downtrend. Furthermore, a higher percentage of stocks above their 200 dma implies buying pressure is more widespread — suggesting the market’s advance is likely sustainable.
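
    The indicator itself is easy to compute. A minimal sketch, assuming a hypothetical DataFrame of daily closing prices with one column per ticker:

        import pandas as pd

        def pct_above_200dma(prices: pd.DataFrame) -> float:
            """Percentage of stocks trading above their 200-day moving average.

            `prices` is assumed to be daily closes, one column per ticker,
            indexed by date (hypothetical layout).
            """
            ma200 = prices.rolling(window=200).mean()
            # Tickers with fewer than 200 observations have NaN averages and
            # are counted as not above their 200 dma.
            above = prices.iloc[-1] > ma200.iloc[-1]
            return 100.0 * above.sum() / prices.shape[1]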

    The chart below shows that 73% of stocks within the S&P 500 are trading above their 200 dma as of July 27, which compares to only 48% at the end of 2022. Moreover, the composition of breadth leadership has turned increasingly bullish. The highest sector readings include technology, industrials, energy, and consumer discretionary.

    “So not only is breadth on the index robust, but cyclical stocks are also leading,” said Turnquist. 

    [Chart: Share of S&P 500 stocks trading above their 200-day moving average. Source: LPL Research, Bloomberg]

    Wall Street often views broadening participation in the stock-market rally as a measure of health and a constructive sign of the sustainability of the bull market. 

    Jimmy Lee, founder and chief executive officer of The Wealth Consulting Group, said he is seeing “a lot of money” flowing into areas that are not the Magnificent Seven, such as stocks in the industrials, financials, materials, energy and even real-estate sectors.

    The S&P 500’s industrials sector climbed 2.9% in July, while the financials sector advanced over 4.7% this month. The S&P 500’s energy sector, which had been the biggest laggard when the rest of the market exited the bear market in June, jumped 7.3% month to date after the U.S. oil benchmark closed above $80 a barrel for the first time since April.

    Meanwhile, the S&P 500’s tech-heavy communication-services sector rose 6.7% in July, while the consumer-discretionary sector gained 2.4% and the information-technology sector was up 2.6%, according to FactSet data.

    See: Stocks are on a seemingly unstoppable hot streak, but this bond-market ‘tipping point’ could see it end in a hurry

    Stephen Hoedt, managing director of equity and fixed income research at Key Private Bank, told MarketWatch in an interview that he doesn’t see “any reason to get bearish here with the fundamentals that are underlying,” which gives investors reason to rotate toward the more cyclical areas such as energy, financials and industrials, while broadening the market away from just being concentrated in the megacap technology names. 

    “The growth has been a surprise this year for everyone, so that’s what the market got wrong coming into this year. When I look at growth, nominal GDP growth translates directly into earnings and we’ve seen earnings continue to surprise on the upside,” Hoedt said. 

    Hoedt pointed to the direction of the 12-month forward earnings estimate for the S&P 500 as an important indicator. “As long as the direction of the 12-month forward earnings number for the S&P 500 is going up, it’s really, really difficult to be bearish on the stock market,” he said. “It seems to me that we may start to see another inflection higher in forward earnings revisions that take into account this stronger growth environment that we’re in.” 

    However, the broadening of the stock-market rally and the bullish sentiment were also driving some on Wall Street to believe stocks are overbought and due for a correction. 

    Lee said there’s still too much pessimism out there and too much concern that some investors haven’t chased the market yet. “In the second half of this year, when the Fed does stop raising rates and if the economy stays out of recession, you can see major money — trillions of dollars moving from the money market into equities and other risk assets,” he told MarketWatch in a phone interview on Friday.

    “When that happens, it’s probably going to push valuations even further. So I would imagine when that happens is when you can expect more of a correction to occur, but I think that we still have more room to go before that happens.” 

    U.S. stocks ended higher on Monday, finishing up July on a positive note. The three major stock indexes rallied this month, with the S&P 500 up 3.1% and booking its fifth monthly gain. The tech-heavy Nasdaq Composite gained 4.1% month to date, while the Dow Jones Industrial Average advanced 3.4%, according to Dow Jones Market Data.


  • Google, Microsoft, and Meta can’t stop talking about A.I. — here’s why Apple rarely mentions it

    Apple CEO Tim Cook arrives for an official State Dinner in honor of India’s Prime Minister Narendra Modi, at the White House in Washington, DC, on June 22, 2023. 

    Stefani Reynolds | AFP | Getty Images

    The most powerful technology companies simply cannot stop talking about artificial intelligence, and in particular, the “generative AI” flavor that can create human-like text, images, and code.

    During calls after this week’s earnings reports, Alphabet CEO Sundar Pichai and his team said “AI” 66 times. Microsoft CEO Satya Nadella and his execs said it 47 times. And on Wednesday, Meta CEO Mark Zuckerberg and the Facebook executive team said the magic phrase 42 times, according to a CNBC analysis of transcripts.
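
    CNBC did not publish its exact methodology, but a tally like that is simple to approximate. A minimal sketch (the transcript filename is hypothetical):

        import re

        def count_ai_mentions(transcript: str) -> int:
            # Count standalone, uppercase "AI" tokens; the word boundaries
            # avoid false hits on words like "air" or "aid".
            return len(re.findall(r"\bAI\b", transcript))

        with open("alphabet_q2_2023_earnings_call.txt") as f:  # hypothetical file
            print(count_ai_mentions(f.read()))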

    But Apple barely talks about artificial intelligence, and you shouldn’t expect to hear much about it during the company’s earnings next week.

    Its sober approach to the new technology contrasts sharply with that of its rivals, which are stoking excitement and elevating expectations every chance they get.

    During May’s Apple earnings call, CEO Tim Cook only said “AI” twice, and that was in response to a question. During Apple’s two-hour software launch event in June, it never said the phrase, although it announced several new features powered by AI.

    Apple execs instead use the phrase “machine learning,” which is more popular with academics and practitioners. Apple execs also prefer to talk about what software does for the user, such as organizing their photos, improving their typing, or filling out fields in a PDF, as opposed to the technology that makes all that possible.

    Apple treats AI as a core underlying component rather than as the future of computing, and that framing shapes how it presents the technology to consumers. Apple’s AI works in the background, and the company doesn’t shout about it the way some of its rivals do because it doesn’t need to.

    Microsoft, Google and Meta are rallying everyone around AI, even though the future is murky

    Google launched Bard, its own chatbot, to rival Microsoft and OpenAI’s ChatGPT.

    Jonathan Raa | Nurphoto | Getty Images

    A closer look at executive remarks this week from earnings calls shows that while Meta, Microsoft, and Google are eager to sell the shovels for the AI gold rush, such as cloud services and developer tools, it’s still unclear how AI could change their most important products and when it could start bolstering balance sheets.

    Google, for example, has announced plans to revamp its search engine with an AI-powered feature called Search Generative Experience. Microsoft’s biggest new initiative is a $30-per-month “Copilot” subscription that integrates text or code generated by partner OpenAI’s ChatGPT technology into Word, PowerPoint, and other apps. Meta’s most recent investment in AI technology is its own large language model, called LLaMA, which could underpin new kinds of social media chatbots or automatically generate online ads.

    Meanwhile, Apple still makes the bulk of its money from iPhones, which generated $51.3 billion of its $94.84 billion in revenue during the company’s second fiscal quarter. Why talk a big AI game?

    Besides, mega-cap tech companies signaled to investors earlier this week in earnings calls that the rollout of AI products could take a while.

    In Microsoft’s case, Nadella tempered investor expectations for Copilot, signaling that growth would take time, and CFO Amy Hood said that its rollout would be “gradual.”

    It could take until next year before investors understand how the Copilot subscription affects the company’s revenue. “In the second half of the next fiscal year, we’ll start getting some of the real revenue signal from it,” Nadella said.

    Pichai says the company’s text-generating AI models will make its search engine better and could even answer questions that a normal Google search can’t. From a business perspective, he said, generative AI used for creating and serving ads will “supercharge” the company’s existing ads business, and there are “opportunities” for new kinds of ads in AI-generated search.

    But Pichai said it’s still “early days” for the new AI-powered search, and when pressed later about how SGE might increase usage of the search engine, and therefore revenue, he said the company was experimenting.

    “I think we are definitely headed in the right direction, and we can see it in our metrics and the feedback we’re getting from our users as well,” Pichai said.

    Zuckerberg was effusive about AI technology and its applications in virtual reality, ad targeting, and recommending content from accounts users don’t follow.

    He was particularly optimistic about a concept called “AI agents,” where software would be able to message business customers automatically without a human involved, or act as a coach, or be a personal assistant.

    Still, Zuckerberg admitted he didn’t know how many people would use the new AI features.

    “The reality is, we just don’t know how quickly these will scale,” Zuckerberg said. He said Meta was debating internally how much it should spend on servers for AI.

    The peak of the hype cycle

    Microsoft’s Bing with ChatGPT-4 on screen, seen in a photo illustration in Brussels, Belgium, on March 12, 2023.

    Jonathan Raa | Nurphoto | Getty Images

    The slow rollout of revenue-generating AI products from Big Tech matters because many people in the technology industry believe that new foundational technologies go through a “hype cycle,” a pattern described by the research firm Gartner.

    When a new technology is introduced, according to the hype-cycle model, it gains lots of attention and investment as it reaches a “peak of inflated expectations.” Then, as deployment moves more slowly than initially expected, enthusiasm and investment dry up in a “trough of disillusionment” before the technology matures and becomes productive.

    For now, shovel-makers and people seeking investment capital are benefiting from the AI boom. Nvidia stock has risen 220% so far in 2023 as investors have realized its GPUs are essential for the technology. Venture capital investment in AI startups has boomed, and many of those dollars are going to Nvidia for computing capacity, and to cloud providers for access to AI models.

    But if everyday consumer applications for AI don’t catch on, many AI companies could slip into the trough of disillusionment. Analysts found earlier this month, for example, that downloads of OpenAI’s iPhone app, launched in May, have slowed.

    Some analysts are starting to caution that the investment opportunity in new AI products won’t materialize immediately and that the costs could stack up.

    “We cautioned investors that that process of translating early demand to large-scale implementations and recognized revenue will be a multi-year trend rather than an instantaneous flip of a switch,” JPMorgan analyst Mark Murphy wrote this week.

    “We recommend investors invest elsewhere until Metaverse, Reels, Threads, Quest and Generative AI investments become accretive (if ever) to META’s [return on invested capital], rather than dilutive,” Needham’s Laura Martin wrote in a note.

    UBS analyst Lloyd Walmsley wrote this week that generative AI remained an “overhang” on Google.

    “Management expressed optimism around the ability to solve for ‘deeper and broader’ use cases with Search Generative Experience (SGE), but we do not believe the company is out of the woods with management still describing monetization as having a ‘number of experiments in flight including (for) ads,’” Walmsley wrote.

    Apple’s a product company

    Apple iPhones are displayed at an Apple store in Chicago on Nov. 28, 2022.

    Scott Olson | Getty Images

    When Apple reports its earnings next week, analysts will likely press it on its plans for AI, given the industry-wide obsession, and especially after a recent Bloomberg report that said the company was developing a ChatGPT-like language model internally.

    Last month, Apple announced new iPhone keyboard software that uses the same transformer architecture as GPT, a sign that it has substantial internal development of AI models underway. It simply doesn’t talk up products that aren’t yet on the market in order to stoke investor anticipation.
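
    To make the connection concrete, next-word suggestion of the kind Apple described can be sketched with any small GPT-style causal language model. The sketch below uses the openly available GPT-2 model from the Hugging Face transformers library purely as a stand-in; Apple’s on-device model is not public, so this illustrates the general technique, not Apple’s implementation:

        # pip install transformers torch
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        # GPT-2 is a stand-in here; Apple's keyboard model is private.
        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        def suggest_next_words(prefix: str, k: int = 3) -> list[str]:
            """Return the k most likely next tokens for a typed prefix."""
            input_ids = tokenizer.encode(prefix, return_tensors="pt")
            with torch.no_grad():
                logits = model(input_ids).logits
            # The logits at the last position score every candidate next token.
            top = torch.topk(logits[0, -1], k)
            return [tokenizer.decode(int(i)).strip() for i in top.indices]

        print(suggest_next_words("I'll call you when I get"))
        # A transformer ranking likely next words is the same basic
        # mechanism behind GPT-style text generation.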

    Apple is unlikely to discuss AI at length next week as its mega-cap rivals did this week. During Apple’s earnings call in May, when asked about the technology, Cook quickly moved the conversation back to the company’s products and features.

    “We view AI as huge and we’ll continue weaving it in our products on a very thoughtful basis,” Cook said.
