ReportWire

Tag: Meta

  • How Meta’s Link Limit in Facebook Posts Will Cost Small Business Marketers


    Marketing products on Facebook is about to become more expensive for influencers, content creators, and companies. The social platform’s parent company, Meta, informed members with business accounts that they’ll have to start paying if they want to send more than two links per month to customers and followers through the site.

    The good news for entrepreneurs and small companies promoting their businesses on Facebook is that Meta’s move to limit links on organic posts is currently just a test. The bad news is there’s a better than fair chance the tech giant will not only make the two-free-monthly-links-policy permanent, but possibly extend it to its other social platforms like Instagram. The reason? The trial restriction reflects Meta’s ongoing efforts to wring as much profit from its various business units as possible.

    The alert sent to Facebook business account holders noted that the only way to avoid the link limitation is to “(s)ubscribe to Meta Verified” for the monthly fee of $14.99. That premium option already offers users a badge vouching for their company’s legitimacy, and also provides protective measures against fraudsters impersonating them.

    Several media reports have quoted Meta officials stressing the trial nature of the link limitation. Social media expert Matt Navarra was among the first people to alert other business account holders to the change, and offered Meta’s reasoning behind it.

“This is a limited test to understand whether the ability to publish an increased volume of posts with links adds additional value for Meta Verified subscribers,” Navarra wrote in a Facebook post, in which he initially seemed to try to calm fears that the expensive update will remain in place for good. “This isn’t enforcement or a platform-wide rule change — it’s a small, controlled test.”

But in subsequent posts, Navarra changed his tone, noting Meta’s continued quest to monetize as many aspects of its social media platforms as possible. Those reminders were unlikely to allay his readers’ fears that the current trial pushing Facebook business account holders toward Meta Verified is simply the next step in the company’s profit-enhancing process.

    “(I)t does reinforce a broader direction,” Navarra acknowledged. “Meta Verified is increasingly being treated as a trust layer, not just a badge. If this expands, it would mark a meaningful shift.”

    In some ways, it already does.

Facebook business account holders won’t just be limited to two free monthly links in their posts, which most use to drive followers or customers to their content. In its restriction notice, Meta also underlined that those two freebies must be used within each month, because “unused posts won’t be rolled over” for use later on.

    Even in the restriction’s current test, Navarra noted, creators and small business marketers are effectively watching their unlimited link publishing capabilities being placed behind the Meta Verified paywall.

    “This isn’t really about verification as much as about bundling survival features behind a subscription,” Navarra told the BBC. “If you’re a creator or a business, I think the message is essentially if Facebook is a part of your growth or traffic strategy, that access now has a price tag attached to it… And that’s new in its explicitness, even if it’s been the direction of travel for a while.”

    In other words, don’t be shocked if the current test becomes a permanent rule — and starts migrating to other Meta social platforms. Anticipating that, Navarra offered affected entrepreneurs a valuable communications reminder.

    “Tests like this underline why building a business that’s overly dependent on any one platform’s goodwill is incredibly risky,” he told the broadcaster, saying this kind of squeezing will likely increase over time. “For creators it reinforces a pretty brutal reality that Facebook is no longer a reliable traffic engine and Meta is increasingly nudging it away from people trying to use it as one.”


Bruce Crumley

  • Apple’s Leadership Exodus Isn’t a Crisis. It’s Just Smart Transition Planning


    At a normal company, people come and go. Top executives leave and move on to other roles and companies. Lower-level employees find a better job and post a “life update” on Threads. It’s a pretty, well, normal thing that happens all the time.

    Apple, on the other hand, seems to enjoy a remarkable level of stability in this regard. Obviously, Apple employs a lot of people, and I’m sure a lot of them are looking for a new job at any given time. Many of the people on the iPhone maker’s leadership page, however, have been there for a decade or more. Turnover at the top—with a few exceptions—is rare.

Partly that’s because the company’s history is one long case study in slow, deliberate succession. When Steve Jobs handed the CEO role to Tim Cook in 2011, it wasn’t a surprise to anyone inside the company. The groundwork had been laid for years, and Cook had already stepped in as interim CEO once before.

Now, however, we’ve seen a handful of departures over the past few weeks, and some see it as a sign that there’s something wrong. First, there were reports that Tim Cook plans to step down in early 2026. Then, Jeff Williams, who had been Chief Operating Officer since 2015, retired. Alan Dye, the head of human interface design, left for Meta. John Giannandrea is leaving, as are Lisa Jackson and Kate Adams. And former CFO Luca Maestri retired at the beginning of 2025.

Then there were the rumors that Apple’s chip chief, Johnny Srouji, was looking to exit, though it seems that reporting may have been premature. Srouji told his staff he wasn’t “planning to leave any time soon,” but someone gave the idea to Bloomberg, which reported that he had been in conversations about going elsewhere.

    Even if Srouji isn’t going anywhere, the collective exodus is hard to ignore. After all, if that many people are leaving, something must be up, right?

Maybe. On the other hand, the fact that a number of people are leaving doesn’t mean there’s something wrong. I’d argue it’s actually pretty normal. In fact, I think it makes perfect sense, especially if it’s true that Cook is planning to retire in the next 12 to 18 months. In that case, the departures look like the result of a CEO telling everyone who works for him that if you’re going to go, now is the time.


Jason Aten

  • Facebook is testing a link posting limit for professional accounts and pages | TechCrunch


    In a new experiment, Meta is limiting the number of links users can post on Facebook, unless they have a paid Meta Verified subscription.

Over the last week, several users have spotted Meta’s test, which impacts link posting. Social media strategist Matt Navarra noted that users who are part of the test can only post two links unless they pay for a Meta Verified subscription, which starts at $14.99 per month.

According to the screenshot posted by Navarra, users can still post affiliate links, links in comments, and links to posts on Meta platforms, including Facebook, Instagram, and WhatsApp.

    The company confirmed the test to TechCrunch and said it impacts those people using professional mode and Facebook Pages. Professional mode lets you convert your personal profile into a creator profile while making your content eligible for discovery by a wider audience.

    “This is a limited test to understand whether the ability to publish an increased volume of posts with links adds additional value for Meta Verified subscribers,” a Meta spokesperson told TechCrunch.

    This would directly impact creators and brands posting links from their blogs or other platforms to reach a wider audience.

    The company said it is trying to learn how it can add more value to Meta Verified subscribers, and this test is one such experiment to enhance that paid plan. The company added that, at the moment, publishers are not included in this test. It also said that users can still post links in comments, and they are not impacted by the limit.

In its transparency report for Q3, Meta said that more than 98% of feed views in the U.S. come from posts that don’t have any links. It is not clear if this signal pushed the company to experiment with limits on link sharing, however. The company said that the majority of the 1.9% of views on posts with links came from pages users followed. Linked posts shared by friends and groups were minimal.

    Image Credits: Meta

The same report noted that YouTube and TikTok, along with GoFundMe, were the top domains among the links posted. With the new link posting limit test, creators and brands would be forced to post content from other Meta platforms if they reached their limit, or stop posting altogether if they didn’t want to pay for a subscription.

As AI has taken over the internet, there is an ever-raging debate about the future of the link-based web. AI summaries and AI-powered search have negatively impacted the publishing industry. In the past few years, social networks like X have toyed with demoting linked posts to encourage users to post content on the platforms natively.

Ivan Mehta

  • Usher joins Carversations premiere to discuss social media and parenting


    Thursday evening at Pullman Yards drew a crowd of adults, parents, and children, from grade-schoolers to teens, for the premiere of Carversations, Instagram’s new series that focuses on fostering honest conversations between parents and teens about social media and online life. The series is designed to make these discussions feel more approachable.

    Grammy-winning singer Usher was the featured guest of the night, joining a panel with his sons, Naviyd Ely Raymond and Usher Raymond V. The discussion, moderated by radio personality Kenny Burns, focused on the challenges families face in a world increasingly influenced by social media.

Photo by Tabius McCoy/The Atlanta Voice

Usher shared his perspective on parenting and the lessons he has learned along the way. “The hardest conversation I have to have with my kids is the opinions of others—they don’t define you,” he said. “You can learn from them [his kids] and not let the trauma of our past determine how they turn out.” His sons also contributed, offering their own experiences growing up in the era of social media.


    Kristin Hendrix, Meta’s VP of Strategic Partnerships for Trust and Safety, explained the purpose behind Carversations, saying the series is designed to help parents navigate online safety, screen time, and digital boundaries alongside their teens. By having these conversations side by side, sometimes even in a car, parents and teens can address topics that might otherwise be difficult to start. “Technology isn’t going anywhere,” Hendrix said, “and these conversations shouldn’t either.” She highlighted Meta’s family-focused tools, including teen accounts with default privacy settings, parent supervision features, and content moderation options for younger users, which aim to give parents insight into their teens’ online habits while still allowing teens space to explore safely.


    The first episode also underscored that these conversations are ongoing. Parents had the opportunity to see how their teens view the digital world, and teens could share their perspectives without fear of judgment. Hendrix added that seeing figures like Usher engage with their children openly can serve as an example and encourage families to have similar discussions at home.

Overall, Carversations provided attendees with insight into bridging generational gaps in the digital age. By showing how parents and teens can communicate openly about social media and online life, the series demonstrated that these talks, while sometimes challenging, can be constructive and meaningful.


Tabius McCoy, Report for America Corp Member

  • This entrepreneur’s product went viral on TikTok. Scammers quickly swooped in.


    Michelle Mildred is the proud entrepreneur behind the company Coloring Your Own. She’s not the owner of a company called “Flolyed Shop,” which is just one of the many sites posting fake ads using her face and voice. 

    The single mother says the ads are promoting products that look like hers, and sending customers to scam sites overseas.

    “I oscillate between like, ‘I can hang on until this ends,’ and then, ‘I don’t know how much more I can take,’” Mildred said.  

She says some customers who order from the scam sites receive counterfeit products, and some receive nothing at all. The knockoffs that do arrive are much lower quality.

    “You can see the print is really glitchy,” she said while showing WCCO a knockoff one of her customers unknowingly purchased.  

It all started after a product she posted to TikTok in September went viral.

    “Within 36 hours there were fraudulent videos on Amazon, and then Walmart, Temu,” she said.

    Mildred individually reported the sponsored ads on TikTok, Facebook and Instagram.

    “I did hire an intellectual property firm. They’ve taken down 175 listings, but I’ve reported over 750 and it takes them a while to get up and running,” she said.

    It’s an effort costing her nearly $2,000 a month out-of-pocket, and endless back-and-forth conversations.

    “I have to bring this to Facebook and be like, ‘Hey, turn off this revenue stream for you because it’s causing damage to my small business,’” she said.

Mildred is now taking steps to watermark her videos and website, and she’s urging consumers to watch out, too.

    “I didn’t pay myself for four years,” she said. “I don’t know what the future looks like.”

    Mildred says these are ways you can best protect yourself:

    • If you see something advertised on social media, click on the page itself to see who’s running the ad and their reviews.
    • Go to the website and see what other items are offered, and if they look AI-generated.
• Search the website’s name in Google along with “scam” or “fraud,” and check the seller’s reviews on Trustpilot.

    WCCO has reached out to Meta and TikTok for comment.

Frankie McLister

  • Time’s 2025 Person of the Year goes to “the architects of AI”


    Time magazine is spotlighting key players in the artificial intelligence revolution for its 2025 Person of the Year, the magazine announced Thursday. “The architects of AI” are the latest recipients of the designation, which for more than a century has been given out on an annual basis to an influential person, group of people or, occasionally, a defining cultural theme or idea. 

Previous Person of the Year title-holders have held varying roles in a vast range of occupations, with President Trump taking last year’s cover and Taylor Swift capturing the one before.

Time’s 2025 honorific was given to the minds and financiers behind AI’s rise to renown and notoriety, including Nvidia CEO Jensen Huang, SoftBank CEO Masayoshi Son and Baidu CEO Robin Li, who spoke directly with the magazine for its feature story.

    “Person of the Year is a powerful way to focus the world’s attention on the people that shape our lives,” wrote Sam Jacobs, Time’s editor-in-chief, in an editorial piece about the magazine’s decision. “And this year, no one had a greater impact than the individuals who imagined, designed, and built AI.”

    Jacobs described 2025 as “the year when artificial intelligence’s full potential roared into view, and when it became clear that there will be no turning back or opting out,” adding: “Whatever the question was, AI was the answer.”

    The magazine prepared two separate covers for the issue. In one, artist Jason Seiler painted an interpretative recreation of the iconic 1932 photograph “Lunch Atop a Skyscraper,” an image that depicted workers seated side-by-side on a steel beam hanging high above New York City during the construction of 30 Rockefeller Plaza, which became a symbol of American resilience during the Great Depression. 

A cast of tech industry characters at the forefront of AI development are perched on the beam in Seiler’s recreation. Mark Zuckerberg, of Meta, Lisa Su, of Advanced Micro Devices, Elon Musk, of xAI, Sam Altman, of OpenAI, Demis Hassabis, of DeepMind Technologies, Dario Amodei, of Anthropic, and Fei-Fei Li, of Stanford’s Human-Centered AI Institute, are all pictured, along with Huang.

    The second cover illustration, by artist Peter Crowther, places the same executives among scaffolding at what looks like a construction site for the giant letters “AI.”

    From left, cover art by Jason Seiler and Peter Crowther for TIME’s 2025 Person of the Year magazine spread.

    Jason Seiler/TIME; Peter Crowther/TIME


    “Every industry needs it, every company uses it, and every nation needs to build it,” Huang said of balancing the pressures to implement AI responsibly and deploy it to the public as quickly as possible. “This is the single most impactful technology of our time.”  

    Most of the industry figures pictured on Time’s cover did not speak to the magazine for the story, so this year’s spread mainly focuses on the implications — positive, negative and in between — of the companies they have built and the technology they continue forging. 

AI often took center stage in 2025 in investigative news reports, in economic and academic studies, and in Washington, D.C., where policymakers grappled with how to regulate its evolution while tech giants scrambled to outdo their competitors’ inventions. The use of some of those inventions, like chatbots, grew commonplace, at times with tragic consequences.

    “For these reasons, we recognize a force that has dominated the year’s headlines, for better or for worse,” Jacobs wrote in his editorial. “For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME’s 2025 Person of the Year.”


  • Nearly a third of American teens interact with AI chatbots daily, study finds


    New York (CNN) — Nearly a third of US teenagers say they use AI chatbots daily, a new study finds, shedding light on how young people are embracing a technology that’s raised critical safety concerns around mental health impacts and exposure to mature content for kids.

    The Pew Research Center study, which marks the group’s first time surveying teens on their general AI chatbot use, found that nearly 70% of American teens have used a chatbot at least once. And among those who use AI chatbots daily, 16% said they did so several times a day or “almost constantly.”

    AI chatbots have been pitched as learning and schoolwork tools for young people, but some teens have also turned to them for companionship or romantic relationships. That’s contributed to questions about whether young people should use chatbots in the first place. Some experts have worried that their use even in a learning context could stunt development.

    Pew surveyed nearly 1,500 US teens between the ages of 13 and 17 for the report, and the pool was designed to be representative across gender, age, race and ethnicity, and household income.

    ChatGPT was by far the most popular AI chatbot, with more than half of teens reporting having used it. The other top players were Google’s Gemini, Meta AI, Microsoft’s Copilot, Character.AI and Anthropic’s Claude, in that order.

    A nearly equal proportion of girls and boys — 64% and 63%, respectively — say they’ve used an AI chatbot. Teens ages 15 to 17 are slightly more likely (68%) to say they’ve used chatbots than those ages 13 to 14 (57%). And usage increases slightly as household income goes up, the survey found.

    Just shy of 70% of Black and Hispanic teens say they’ve used an AI chatbot, slightly higher than the 58% of White teens who say the same.

    The findings come after two of the major AI firms, OpenAI and Character.AI, have faced lawsuits from families who alleged the apps played a role in their teens’ suicides or mental health issues. OpenAI subsequently said it would roll out parental controls and age restrictions. And Character.AI has stopped allowing teens to engage in back-and-forth conversations with its AI-generated characters.

    Meta also came under fire earlier this year after reports emerged that its AI chatbot would engage in sexual conversations with minors. The company said it had updated its policies and next year will give parents the ability to block teens from chatting with AI characters on Instagram.

    At least one online safety group, Common Sense Media, has advised parents not to allow children under 18 to use companion-like AI chatbots, saying they pose “unacceptable risks” to young people.

    Some experts have also raised concerns that the use of AI for schoolwork could encourage cheating, although others say the technology can provide more personalized learning support.

    Meanwhile, AI companies have pushed to get their chatbots into schools. OpenAI, Microsoft and Anthropic have all rolled out tools for students and teachers. Earlier this year, the companies also partnered with teachers unions to launch an AI instruction academy for educators.

    Microsoft, in particular, has sought to position its Copilot as the safest choice for parents, with AI CEO Mustafa Suleyman telling CNN in October that it will never allow romantic or sexual conversations for adults or children.

Clare Duffy and CNN

  • Cetera Investment Advisers Purchases 26,700 Shares of Meta Platforms, Inc. $META


Cetera Investment Advisers raised its holdings in Meta Platforms, Inc. (NASDAQ:META) by 5.2% during the 2nd quarter, according to its most recent disclosure with the Securities & Exchange Commission. The firm owned 536,647 shares of the social networking company’s stock after buying an additional 26,700 shares during the period. Meta Platforms comprises approximately 0.7% of Cetera Investment Advisers’ investment portfolio, making the stock its 19th biggest holding. Cetera Investment Advisers’ holdings in Meta Platforms were worth $396,094,000 at the end of the most recent quarter.

    Several other hedge funds and other institutional investors also recently made changes to their positions in the business. Kingstone Capital Partners Texas LLC boosted its holdings in shares of Meta Platforms by 608,429.2% during the 2nd quarter. Kingstone Capital Partners Texas LLC now owns 59,775,823 shares of the social networking company’s stock worth $44,119,937,000 after buying an additional 59,766,000 shares during the period. Geode Capital Management LLC raised its holdings in Meta Platforms by 1.3% in the 2nd quarter. Geode Capital Management LLC now owns 51,575,209 shares of the social networking company’s stock valued at $37,902,948,000 after acquiring an additional 682,768 shares during the period. Invesco Ltd. lifted its position in Meta Platforms by 2.3% during the first quarter. Invesco Ltd. now owns 17,669,795 shares of the social networking company’s stock worth $10,184,163,000 after acquiring an additional 400,927 shares during the last quarter. Goldman Sachs Group Inc. lifted its position in Meta Platforms by 8.8% during the first quarter. Goldman Sachs Group Inc. now owns 15,575,962 shares of the social networking company’s stock worth $8,977,361,000 after acquiring an additional 1,255,546 shares during the last quarter. Finally, UBS AM A Distinct Business Unit of UBS Asset Management Americas LLC boosted its holdings in shares of Meta Platforms by 4.5% during the first quarter. UBS AM A Distinct Business Unit of UBS Asset Management Americas LLC now owns 12,543,468 shares of the social networking company’s stock worth $7,229,553,000 after acquiring an additional 536,160 shares during the period. 79.91% of the stock is currently owned by institutional investors.

    Insider Buying and Selling

    In related news, Director Robert M. Kimmitt sold 600 shares of Meta Platforms stock in a transaction on Monday, November 17th. The shares were sold at an average price of $609.35, for a total transaction of $365,610.00. Following the completion of the transaction, the director owned 7,347 shares of the company’s stock, valued at $4,476,894.45. This trade represents a 7.55% decrease in their ownership of the stock. The sale was disclosed in a filing with the Securities & Exchange Commission, which is available at the SEC website. Also, COO Javier Olivan sold 517 shares of the company’s stock in a transaction on Monday, November 17th. The shares were sold at an average price of $604.23, for a total transaction of $312,386.91. Following the sale, the chief operating officer directly owned 15,302 shares in the company, valued at $9,245,927.46. This trade represents a 3.27% decrease in their position. The SEC filing for this sale provides additional information. Insiders have sold a total of 40,923 shares of company stock valued at $26,126,437 over the last quarter. 13.61% of the stock is currently owned by company insiders.

    Analyst Ratings Changes

    A number of brokerages have recently issued reports on META. TD Cowen reduced their target price on shares of Meta Platforms from $875.00 to $810.00 and set a “buy” rating for the company in a research report on Thursday, October 30th. Benchmark cut shares of Meta Platforms from a “buy” rating to a “hold” rating in a research note on Thursday, October 30th. HSBC raised shares of Meta Platforms from a “hold” rating to a “buy” rating and boosted their price objective for the company from $610.00 to $900.00 in a report on Thursday, July 31st. Scotiabank raised their target price on shares of Meta Platforms from $675.00 to $685.00 and gave the stock a “sector perform” rating in a report on Thursday, July 31st. Finally, Loop Capital reiterated a “buy” rating and issued a $980.00 price target (up from $888.00) on shares of Meta Platforms in a research report on Tuesday, August 5th. Three equities research analysts have rated the stock with a Strong Buy rating, thirty-nine have issued a Buy rating and eight have given a Hold rating to the company’s stock. Based on data from MarketBeat.com, the company presently has an average rating of “Moderate Buy” and a consensus target price of $823.93.

    View Our Latest Stock Analysis on META

    Meta Platforms Trading Up 3.8%

    NASDAQ META opened at $636.22 on Wednesday. The company has a market cap of $1.60 trillion, a price-to-earnings ratio of 28.10, a price-to-earnings-growth ratio of 1.28 and a beta of 1.20. Meta Platforms, Inc. has a twelve month low of $479.80 and a twelve month high of $796.25. The company has a quick ratio of 1.98, a current ratio of 1.98 and a debt-to-equity ratio of 0.15. The business’s fifty day moving average is $691.78 and its 200 day moving average is $706.84.

Meta Platforms (NASDAQ:META) last posted its earnings results on Wednesday, October 29th. The social networking company reported $7.25 EPS for the quarter, topping the consensus estimate of $6.74 by $0.51. Meta Platforms had a net margin of 30.89% and a return on equity of 39.35%. The business had revenue of $51.24 billion for the quarter, compared to the consensus estimate of $49.34 billion. During the same period last year, the company earned $6.03 EPS. The firm’s revenue was up 26.2% compared to the same quarter last year. Meta Platforms has set its Q4 2025 guidance at EPS. As a group, sell-side analysts expect that Meta Platforms, Inc. will post 26.7 EPS for the current fiscal year.

    Meta Platforms Dividend Announcement

    The firm also recently declared a quarterly dividend, which was paid on Monday, September 29th. Stockholders of record on Monday, September 22nd were issued a dividend of $0.525 per share. This represents a $2.10 dividend on an annualized basis and a dividend yield of 0.3%. The ex-dividend date was Monday, September 22nd. Meta Platforms’s payout ratio is 9.28%.
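The annualized dividend and yield figures quoted above follow from simple arithmetic on the quarterly payout and the share price; a quick sketch for readers who want to verify (the $636.22 price is the open reported earlier in this story):

```python
# Verify the dividend arithmetic reported above.
quarterly_dividend = 0.525      # per-share payout declared for September 29th
share_price = 636.22            # META's reported opening price

# Four equal quarterly payments give the annualized dividend.
annualized = quarterly_dividend * 4

# Yield is the annualized dividend as a percentage of the share price.
yield_pct = annualized / share_price * 100

print(f"Annualized dividend: ${annualized:.2f}")   # $2.10
print(f"Dividend yield: {yield_pct:.1f}%")         # 0.3%
```

The payout ratio works the same way: the $2.10 annual dividend divided by trailing earnings per share gives the 9.28% figure cited.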

    About Meta Platforms


    Meta Platforms, Inc engages in the development of products that enable people to connect and share with friends and family through mobile devices, personal computers, virtual reality headsets, and wearables worldwide. It operates in two segments, Family of Apps and Reality Labs. The Family of Apps segment offers Facebook, which enables people to share, discuss, discover, and connect with interests; Instagram, a community for sharing photos, videos, and private messages, as well as feed, stories, reels, video, live, and shops; Messenger, a messaging application for people to connect with friends, family, communities, and businesses across platforms and devices through text, audio, and video calls; and WhatsApp, a messaging application that is used by people and businesses to communicate and transact privately.



    Institutional Ownership by Quarter for Meta Platforms (NASDAQ:META)




ABMN Staff

  • Mark Zuckerberg’s Net Worth Drops As Meta’s AI Plan Spooks Investors


Mark Zuckerberg fell to fifth place on the Bloomberg Billionaires Index — the lowest in nearly two years — as investors spooked by Meta Platforms Inc.’s planned $30 billion debt sale sent the company’s shares spiraling amid a flurry of tech earnings shaking up the ranks of the world’s richest.

    Meta’s stock fell 11% — the most since 2022 — after the company said it was going to issue the biggest investment-grade bond offering of the year to boost spending on artificial intelligence research, dropping Zuckerberg’s net worth to $235.2 billion, according to the wealth index.

    He was leapfrogged by Amazon.com Inc.’s Jeff Bezos and Alphabet Inc.’s Larry Page, who hadn’t been among the four-richest people since October 2023. Alphabet’s shares climbed 2.5% after it reported revenue that beat analysts’ expectations amid a surge in demand for its cloud and AI services.

    READ: Mark Zuckerberg vs Mark Zuckerberg: The Legal Battle Over A Name

    Zuckerberg’s $29.2 billion drop was the fourth-largest one-day market-driven decline ever recorded by Bloomberg’s wealth index.

    Meta’s stock had gained 28% this year before Thursday’s swoon, adding $57 billion to Zuckerberg’s fortune. But doubts over Meta’s ballooning AI budget gave investors pause, with at least two analysts downgrading the company’s shares after it said it expected to spend up to $118 billion in capital expenditures this year and possibly more in 2026.

    Amazon shares have gained more than 30% since a mid-April low. Investors have cheered its cloud-computing unit, which has steadily grown as it has signed splashy deals with AI firms including Anthropic. The company reported third-quarter sales and profit that topped estimates, sending shares surging in after-hours trading. 
     

    (Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)


    [ad_2]

    Source link

  • Those Viral Photos of Elon and Zuck Are AI. But Google Launched a New Way to Check for Fakes

    [ad_1]

    Photos appearing to show Elon Musk and several other Big Tech CEOs have gone viral in the past week on X and Bluesky. The mundane environments, including humble apartments and McDonald’s parking lots, should have given everyone a hint that they’re fake. But there’s a new way for the average person to check for themselves whether the images were made with AI. And it’s actually really useful.

Right off the bat, it should be said that the vast majority of AI image detectors are not reliable. Many people assume the tools openly available on the web can figure out whether a given image is AI, but they’re not good at it. For example, people often ask Grok on X whether a photo was created with generative artificial intelligence, and it frequently gets the answer wrong, sometimes in amusing ways.

    Google developed an AI watermark called SynthID a couple of years ago, but the company didn’t allow the average user to check whether an image had the watermark. That changed just a few days ago. Now anyone can upload an image to Gemini and ask if it has the SynthID watermark, which is invisible to the naked eye.

    The watermark is embedded in the pixels and every image created with Google’s AI creation tools will have it. Checking for the watermark is now easy for anyone who opens up Gemini.

    From Google’s announcement:

    If you see an image and want to confirm it has been made by Google AI, upload it to the Gemini app and ask a question such as: “Was this created with Google AI?” or “Is this AI-generated?”

    Gemini will check for the SynthID watermark and use its own reasoning to return a response that gives you more context about the content you encounter online.

Obviously Gemini is less equipped to tell you if an image is AI if it wasn’t made with Google tools like Nano Banana Pro. And that’s the entire reason the company appears to be launching SynthID detection in Gemini at this moment. Nano Banana Pro launched last week and it’s allowing users to make incredibly realistic images, including images of Elon Musk and other tech CEOs that look very real.

    Some of those images have recently gone viral, like one that racked up nearly 9 million views on X before migrating to other platforms like Bluesky. The image shows Musk, Nvidia CEO Jensen Huang, Google CEO Sundar Pichai, Apple CEO Tim Cook, Amazon founder Jeff Bezos, Microsoft CEO Satya Nadella, and Meta CEO Mark Zuckerberg all standing together in a small apartment.

     

    Other versions of the image include OpenAI CEO Sam Altman, with the men standing around in a parking lot, pictured at the top of this article. For some reason, Musk is seen smoking a cigar in a couple of them. Another image showed the men in the parking lot from a different angle. And still another had the men eating McDonald’s on the ground with a Cybertruck in the background.

If you run any of these images through Gemini, it confirms they all have the SynthID watermark. If an image seems too weird to be true, it’s probably a good idea to check with Gemini.

    Did you see that viral image of President Donald Trump with Bill “Bubba” Clinton in a very compromising position? Running that image through Gemini confirms it was made with Google’s AI image generator. Gemini won’t necessarily be able to ID every AI image with certainty. But if you run an image through Gemini and it tells you the “photo” has the SynthID watermark, you know it’s not real.

    Fake images are still going to be everywhere in the current social media environment. But at least Google has given the average user a new tool to identify at least some of the fakes for themselves. It’s only going to get harder and harder to recognize AI-generated content as the years progress. Sometimes you just need to apply some common sense. For example, do you think Elon Musk and Sam Altman would be hanging out in a parking lot together? Given their very public conflicts, that seems very unlikely.

    Then again, it seemed very unlikely that Musk and President Trump would become friendly again after the Tesla CEO accused Trump of being in the Epstein files. Weirder things have happened when billions of dollars are at stake.

    [ad_2]

    Matt Novak

    Source link

  • Meta is bringing usernames to Facebook Groups

    [ad_1]

    Meta has long required Facebook users to post under their real names (with some exceptions), but at least for Facebook Groups, the company is now offering new options. Members of Facebook Groups will now be able to participate under a custom nickname and avatar, rather than being forced to use their real name or post anonymously.

    You can set a custom nickname via the same toggle that lets you create an anonymous post, Meta says. Nicknames have to be enabled by a group’s administrators, and in some cases individually approved, but once they are, you can switch between posting under your real name or a nickname freely. The only other limitation is that the nickname needs to comply with Meta’s existing Community Standards and Terms of Service. While you set your new nickname, you can also pick from a selection of custom avatars, which seem to mostly be pictures of cute animals wearing sunglasses.

    Groups are one of several areas of Facebook that Meta has continually tried to tweak in the last few years to bring back users. In 2024, the company introduced a tab that highlighted local events shared in Facebook groups. More recently, it added tools for admins to convert private groups into public ones to try and draw in new members. No single change can make Facebook the center of young people’s lives in the way it was in the early 2000s, but letting people use what amounts to a username might encourage Facebook users to explore new groups and post more freely.

    [ad_2]

    Ian Carlos Campbell

    Source link

  • Are tech companies training their AI with private data?

    [ad_1]

    Leading tech companies are in a race to release and improve artificial intelligence products, leaving U.S. users to puzzle out how much of their personal data could be extracted to train AI tools.

    Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn all have rolled out AI app features that have the capacity to draw on users’ public profiles or emails. Google and LinkedIn offer users ways to opt out of the AI features, while Meta’s AI tool provides no means for its users to say no thanks.

    “Gmail just flipped a dangerous switch on October 10, 2025 and 99% of Gmail users have no idea,” a Nov. 8 Instagram post said. 

    Posts warned the platforms’ AI tool rollouts make most private information available for tech company harvesting. “Every conversation, every photo, every voice message, fed into AI and used for profit,” a Nov. 9 X video about Meta said. 

    Technology companies are rarely fully transparent when it comes to the user data they collect and what they use it for, Krystyna Sikora, a research analyst for the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.

    “Unsurprisingly, this lack of transparency can create significant confusion that in turn can lead to fear mongering and the spread of false information about what is and is not permissible,” Sikora said.

The best, if tedious, way for people to know and protect their privacy rights is to read the terms and conditions, since they often explicitly outline how the data will be used and whether it will be shared with third parties, Sikora said. The U.S. doesn’t have any comprehensive federal laws on data privacy for technology companies.

    Here’s what we learned about how each platform’s AI is handling your data:

    Meta

    Social media claim: “Starting December 16th Meta will start reading your DMs, every conversation, every photo, every voice message fed into AI and used for profit.” — Nov. 9 X post with 1.6 million views as of Nov. 19.

    The facts: Meta announced a new policy to take effect Dec. 16, but that policy alone does not result in your direct messages, photos and voice messages being fed into its AI tool. The policy involves how Meta will customize users’ content and advertisements based on how they interact with Meta AI. 

    For example, if a user interacts with Meta’s AI chatbot about hiking, Meta might start showing that person recommendations for hiking groups or hiking boots.

    But that doesn’t mean your data isn’t being used for AI purposes. Although Meta doesn’t use people’s private messages in Instagram, WhatsApp or Messenger to train its AI, it does collect user content that is set to “public” mode. This can include photos, posts, comments and reels. If the user’s Meta AI conversations involve religious views, sexual orientation and racial or ethnic origin, Meta says the system is designed to avoid parlaying these interactions into ads. If users ask questions of Meta AI using its voice feature, Meta says the AI tool will use the microphone only when users give permission.

    There is a caveat: The tech company says its AI might use information about people who don’t have Meta product accounts if their information appears in other users’ public posts. For example, if a Meta user mentions a non-user in a public image caption, that photo and caption could be used to train Meta AI.

Can you opt out? No. If you are using Meta platforms in these ways — making some of your posts public and using the chatbot — your data could be used by Meta AI. There is no way to deactivate Meta AI in Instagram, Facebook or Threads. WhatsApp users can deactivate the option to talk with Meta AI in their chats, but this option is available only per chat, meaning that you must deactivate the option in each chat’s advanced privacy settings.

The X post inaccurately advised people to submit this form to opt out. But the form is simply a way for users to report when Meta’s AI supplies an answer that contains someone’s personal information.

    David Evan Harris, who teaches AI ethics at University of California, Berkeley, told PolitiFact that because the U.S. has no federal regulations about privacy and AI training, people have no standardized legal right to opt out of AI training in the way that people in countries such as Switzerland, the United Kingdom and South Korea do.

Even when social media platforms provide opt-out options for U.S. customers, it’s often difficult to find the settings to do so, Harris said.

    Deleting your Meta accounts does not eliminate the possibility of Meta AI using your past public data, Meta’s spokesperson said.

    Google

    Social media claim: “Did you know Google just gave its AI access to read every email in your Gmail — even your attachments?”  — Nov. 8 Instagram post with more than 146,000 likes as of Nov. 19.

    The facts: Google has a host of products that interact with private data in different ways. Google announced Nov. 5 that its AI product, Gemini Deep Research, can connect to users’ other Google products, including Gmail, Drive and Chat. But, as Forbes reported, users must first give permission to employ the tool.

    Users who want to allow Gemini Deep Research to have access to private information across products can choose what data sources to employ, including Google search, Gmail, Drive and Google Chat.

    There are other ways Google collects people’s data:

• Through searches and prompts in Gemini apps, including its mobile app, Gemini in Chrome or Gemini in another web browser

• Any video or photo uploads the user entered into Gemini

• Through interactions with apps such as YouTube and Spotify, if users give permission

• Through message and phone call apps, including call logs and message logs, if users give permission

    A Google spokesperson told PolitiFact the company doesn’t use this information to train AI when registered users are under age 13. 

Google can also access people’s data when they have smart features activated in their Gmail and Google Workspace settings (they are automatically on in the U.S.), which give Google consent to draw on email content and user activity data to help users compose emails or suggest Google Calendar events. With optional paid subscriptions, users can access additional AI features, including in-app Gemini summaries.

    Turning off Gmail’s smart features can stop Google’s AI from accessing Gmail, but it doesn’t stop Google’s access on the Gemini app, which users can either download or access in a browser.

    (Screenshot shows a permission pop-up that appeared in the Gemini app after a PolitiFact reporter asked Gemini to summarize an email. Gemini asked permission to access that email.)

A California lawsuit accuses Gemini of spying on users’ private communications. The lawsuit says an October policy change gives Gemini default access to private content such as emails and attachments in people’s Gmail, Chat and Meet. Before October, users had to manually allow Gemini to access the private content; now users must go into their privacy settings to disable it. The lawsuit claims the Google policy update violates California’s 1967 Invasion of Privacy Act, a law that prohibits unauthorized wiretapping and recording confidential communications without consent.

Can you opt out? If people don’t want their conversations used to train Google AI, they can use “temporary” chats or chat without signing into their Gemini accounts. Doing that means Gemini can’t save a person’s chat history, a Google spokesperson said. Otherwise, opting out of having Google’s AI in Gmail, Drive and Meet requires turning off smart features in settings.

    LinkedIn

    Social media claim: Starting Nov. 3, “LinkedIn will begin using your data to train AI.” — Nov. 2 Instagram post with more than 18,000 likes as of Nov. 19.

    The facts: LinkedIn, owned by Microsoft, announced on its website that starting Nov. 3, it will use some U.S. members’ data to train content-generating AI models. 

    The data the AI collects includes details from people’s profiles and public content users post.

    The training does not draw on information from people’s private messages, LinkedIn said.

    LinkedIn also said, aside from the AI data access, Microsoft started receiving information about LinkedIn members — such as profile information, feed activity and ad engagement — as of Nov. 3 in order to target users with personalized ads.

Can you opt out? Yes. Autumn Cobb, a LinkedIn spokesperson, confirmed to PolitiFact that members can opt out if they don’t want their content used for AI training purposes. They can also opt out of receiving targeted, personalized ads.

    To remove your data from being used for training purposes, go to data privacy, click on the option that says “Data for Generative AI Improvement” and then turn off the feature that says “use my data for training content creation AI models.”

And to opt out of personalized ads, go to advertising data in settings, and turn off “ads off LinkedIn” and the option that says “data sharing with our affiliates and select partners.”

    [ad_2]

    Source link

  • Meta gives Australian kids 2-week warning to delete accounts as world-first social media age restrictions loom

    [ad_1]

Melbourne, Australia — Technology giant Meta on Thursday began sending thousands of young Australians a two-week warning to download their digital histories and delete their accounts from Facebook, Instagram and Threads before a world-first social media ban on accounts of children younger than 16 takes effect.

    The Australian government announced two weeks ago that the three Meta platforms plus Snapchat, TikTok, X and YouTube must take reasonable steps to exclude Australian account holders younger than 16, beginning Dec. 10.

    California-based Meta on Thursday became the first of the targeted tech companies to outline how it will comply with the law. Meta contacted thousands of young account holders via SMS and email to warn that suspected children will start to be denied access to the platforms from Dec. 4.

    “We will start notifying impacted teens today to give them the opportunity to save their contacts and memories,” Meta said in a statement.

    Meta said young users could also use the notice period to update their contact information “so we can get in touch and help them regain access once they turn 16.”

Meta has estimated there are 350,000 Australians aged 13 to 15 on Instagram and 150,000 in that age bracket on Facebook. Australia’s population is 28 million.

Account holders 16 years old and older who were mistakenly given notice that they would be excluded can contact Yoti Age Verification and verify their age by providing government-issued identity documents or a “video selfie,” Meta said.

    Terry Flew, co-director of Sydney University’s Center for AI, Trust and Governance, said such facial-recognition technology had a failure rate of at least 5%.

    “In the absence of a government-mandated ID system, we’re always looking at second-best solutions around these things,” Flew told the Australian Broadcasting Corp.

    The government has warned platforms that demanding that all account holders prove they are older than 15 would be an unreasonable response to the new age restrictions. The government maintains the platforms already had sufficient data about many account holders to ascertain they were not young children.

    Social media companies will face fines of up to 50 million Australian dollars (about $33 million) if they are found to be failing to prevent people under 16 from creating accounts on their platforms.

Meta’s vice president and global head of safety, Antigone Davis, said she would prefer that app stores, including the Apple App Store and Google Play, collect age information when a user signs up and verify that users are at least 16 years old on behalf of app operators such as Facebook and Instagram.

    “We believe a better approach is required: a standard, more accurate, and privacy-preserving system, such as OS/app store-level age verification,” Davis said in a statement.

    “This combined with our investments in ongoing efforts to assure age … offers a more comprehensive protection for young people online,” she added.

Dany Elachi, founder of the parents’ group Heads Up Alliance that lobbied for the social media age restriction, said parents should start helping their children plan how they will spend the hours currently absorbed by social media.

He was critical of the government for only announcing the complete list of platforms that will become age-restricted on Nov. 5.

    “There are aspects of the legislation that we’re not entirely supportive of, but the principle that children under the age of 16 are better off in the real world, that’s something we advocated for and are in favor of,” Elachi said. “When everybody misses out, nobody misses out. That’s the theory. Certainly we expect that it would play out that way. We hope parents are going to be very positive about this and try to help their children see all the potential possibilities that are now open to them.”

There was significant resistance to the legislation last year, however, including from some children’s advocacy groups.

The CEO of the Save the Children charity, Mat Tinkler, said in a statement a year ago, when the ban was approved by Australian lawmakers, that while he welcomed the government’s efforts to protect children from harm online, the solution should be to regulate social media companies rather than impose a blanket ban.

    He said the government should “instead use the momentum of this moment to hold the social media giants to account, to demand that they embed safety into their platforms rather than adding it as an afterthought, and to work closely with experts and children and young people themselves to make online spaces safer, as opposed to off-limits.”

    The Australian Human Rights Commission, an independent government body, also expressed “serious reservations” over the law before it was approved, saying last year that there were “less restrictive alternatives available that could achieve the aim of protecting children and young people from online harms, but without having such a significant negative impact on other human rights. One example of an alternative response would be to place a legal duty of care on social media companies.”

    [ad_2]

    Source link

  • This Quest 3S Bundle Is $50 Off and Includes a Game and Gift Card

    [ad_1]

    If you’ve been dreaming of getting into virtual reality but you’ve been holding out for a good deal, this may be your moment. I spotted a Meta Quest 3S bundle at Best Buy that not only knocks $50 off the normal price, but also includes Walking Dead: Saints & Sinners and a $50 Best Buy digital gift card. That’s quite the deal on a product that doesn’t often see major discounts, and you can use that gift card to accessorize your new headset.

    Courtesy of Meta

    Meta’s lineup of stand-alone headsets has slowly improved over the last few years, with frequent updates adding functionality and growing the library of games. You don’t need a computer or console to power them, which makes it easy to just toss the headset on and start playing without any extra steps. With object and hand tracking, sometimes you don’t even need controllers, and the pass-through camera lets you blend the real world and the virtual one for awesome mixed reality experiences.

    While the Quest 3S is the more budget-friendly offering in the current generation of headsets, the compromises aren’t as major as you might be thinking. The screen is slightly lower resolution, and the pass-through isn’t quite as sharp, but otherwise the Quest 3S plays the same games and experiences as the more expensive Quest 3. Both headsets suffer from limited battery life, so don’t expect more than a couple of hours of play at a time.

I haven’t had a chance to play the included game, Walking Dead: Saints & Sinners, but it’s described as a survival action game set in a zombie-ridden version of New Orleans. You’ll have to make tough decisions about how to deal with other survivors, and it looks like there are plenty of opportunities to slay zombies. My vibe is usually more mini-golf than shotgun-wielding, but the game has overall positive reviews, and scary stuff can be a lot of fun in VR.

    If you’re ready to pull the trigger on this deal, make sure to swing by my guide to the best Meta Quest games. I’ve got some picks over there that can help you calm down after a long day of swinging your axe at zombies.

    [ad_2]

    Brad Bourque

    Source link

  • A Simple WhatsApp Security Flaw Exposed 3.5 Billion Phone Numbers

    [ad_1]

    WhatsApp’s mass adoption stems in part from how easy it is to find a new contact on the messaging platform: Add someone’s phone number, and WhatsApp instantly shows whether they’re on the service, and often their profile picture and name, too.

    Repeat that same trick a few billion times with every possible phone number, it turns out, and the same feature can also serve as a convenient way to obtain the cell number of virtually every WhatsApp user on earth—along with, in many cases, profile photos and text that identifies each of those users. The result is a sprawling exposure of personal information for a significant fraction of the world population.

A group of Austrian researchers has now shown that it was able to use that simple method of checking every possible number in WhatsApp’s contact discovery to extract 3.5 billion users’ phone numbers from the messaging service. For about 57 percent of those users, the researchers also found they could access profile photos, and for another 29 percent, the text on their profiles. Despite a previous warning about WhatsApp’s exposure of this data from a different researcher in 2017, they say, the service’s parent company, Meta, still failed to limit the speed or number of contact discovery requests the researchers could make by interacting with WhatsApp’s browser-based app, allowing them to check roughly a hundred million numbers an hour.

    The result would be “the largest data leak in history, had it not been collated as part of a responsibly conducted research study,” as the researchers describe it in a paper documenting their findings.

    “To the best of our knowledge, this marks the most extensive exposure of phone numbers and related user data ever documented,” says Aljosha Judmayer, one of the researchers at the University of Vienna who worked on the study.

    The researchers say they warned Meta about their findings in April and deleted their copy of the 3.5 billion phone numbers. By October, the company had fixed the enumeration problem by enacting a stricter “rate-limiting” measure that prevents the mass-scale contact discovery method the researchers used. But until then, the data exposure could have also been exploited by anyone else using the same scraping technique, adds Max Günther, another researcher from the university who cowrote the paper. “If this could be retrieved by us super easily, others could have also done the same,” he says.
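The scale is easy to sanity-check with back-of-the-envelope arithmetic: at the roughly 100 million lookups per hour the researchers achieved, all 3.5 billion numbers could be enumerated in about a day and a half, while even a modest per-client rate limit pushes the same sweep out to decades. A minimal sketch of that arithmetic, where the strict-limit figure is a hypothetical illustration rather than WhatsApp’s actual post-fix threshold:

```python
# Why rate-limiting matters for contact discovery. The 3.5 billion
# accounts and ~100 million lookups/hour come from the researchers'
# findings; the "strict" limit below is a hypothetical value chosen
# for illustration, not WhatsApp's real post-fix threshold.

ACCOUNTS = 3_500_000_000          # phone numbers the researchers enumerated
OBSERVED_RATE = 100_000_000       # lookups per hour the researchers achieved
HYPOTHETICAL_STRICT_RATE = 5_000  # lookups per hour under an aggressive limit

def hours_to_enumerate(total: int, rate_per_hour: int) -> float:
    """Hours needed to check every number at a given lookup rate."""
    return total / rate_per_hour

unthrottled = hours_to_enumerate(ACCOUNTS, OBSERVED_RATE)
throttled = hours_to_enumerate(ACCOUNTS, HYPOTHETICAL_STRICT_RATE)

print(f"At the observed rate: {unthrottled:.0f} hours (~{unthrottled / 24:.1f} days)")
print(f"Under the strict limit: ~{throttled / 24 / 365:.0f} years")
```

The point of the sketch is that a full sweep takes about 35 hours at the rate the researchers observed, versus roughly 80 years under the hypothetical throttle, which is why a "stricter rate-limiting measure" is sufficient to kill mass enumeration.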

    In a statement to WIRED, Meta thanked the researchers, who reported their discovery through Meta’s “bug bounty” system, and described the exposed data as “basic publicly available information,” since profile photos and text weren’t exposed for users who opted to make it private. “We had already been working on industry-leading anti-scraping systems, and this study was instrumental in stress-testing and confirming the immediate efficacy of these new defenses,” writes Nitin Gupta, vice president of engineering at WhatsApp. Gupta adds, “We have found no evidence of malicious actors abusing this vector. As a reminder, user messages remained private and secure thanks to WhatsApp’s default end-to-end encryption, and no non-public data was accessible to the researchers.”

    [ad_2]

    Andy Greenberg

    Source link

  • Meta releases a new tool to protect reels creators from having their work stolen | TechCrunch

    [ad_1]

    Facebook creators are getting a new tool to help them protect their work from being ripped off by others. On Monday, Meta introduced Facebook content protection, a mobile tool designed to detect when a creator’s original reels posted to Facebook are being used without their permission.

    If the creator is alerted that someone else is using their reels, they’ll also have the ability to block the reel’s visibility across both Facebook and Instagram or track the reel’s performance and optionally add attribution links to their work.

    Or they can opt to release their claim on the reel, allowing it to remain visible on Meta’s platforms.

    Meta says the addition of the content protection feature is part of its work to help original creators succeed on Facebook, without being drowned out by copycats. As part of this initiative, Meta said in July it had taken down around 10 million profiles that were impersonating large content creators and had taken action against 500,000 accounts engaged in spammy behavior or fake engagement.

Image Credits: Meta

    Although the new system also works to protect original content that’s posted on Instagram, it requires that creators post their reels to Facebook to have them tracked. This also works if the creator is using the cross-posting option from Instagram to “Share to Facebook.”

    The move could encourage more creators to share their work on Facebook as a result.

    The new content protection system is automatically being provided to Facebook creators in its Facebook Content Monetization program who also meet enhanced integrity and originality standards, the company says. In addition, access to the new program is rolling out to creators who use Rights Manager. 


    Creators can see if they’re eligible by looking for notifications in their Feed, Professional Dashboard, and profile, or they can check to see if they have access to the tool from their Professional Dashboard, under “Content Protection.” They can also apply for access on Facebook’s website.

    To work, the tool uses the same matching technology as is currently used by Meta’s Rights Manager for copyright holders. It will also show the percentage match for each match it surfaces, as well as other details, like views, follower count, and monetization status.

    The company says it’s giving creators control over if and how the system flags matches. For instance, if the creator has given permission to another account to use their content, they can add them to an “allow list” so those duplicate reels aren’t automatically flagged.

    Creators can also release their claim on a video on a one-off basis, or, if they choose to track the performance of a reel on another creator’s account, they can opt to add attribution links. These links will add an “original” label to the reel that links back to the creator’s profile, page, or, in some cases, the original reel itself.

    Meta tells us it’s currently running tests for linking back to the original reel, but linking to the page or profile is the default.

If they block the reel, its distribution is impacted, but the account that stole the reel doesn’t receive any disciplinary action, perhaps because Meta doesn’t want the system abused to target specific accounts. Meta also says that creators submitting false reports could see restrictions against their own accounts or lose access to the tool.

    Tracking reels is the default setting, Facebook notes.

    Creators will also be able to dispute instances where another account tries to protect a piece of the creator’s original work. To do so, creators can submit a copyright takedown request through the IP reporting channel. (They can also submit a report if they find a match that the tool didn’t surface, via a “Can’t find a specific match?” option on the content protection overview screen.)

    For the time being, the new tool is mobile-only, but Meta tells TechCrunch it’s testing adding it to the Professional Dashboard on the desktop.

    [ad_2]

    Sarah Perez

    Source link

  • Apple is ramping up succession plans for CEO Tim Cook and may tap this hardware exec to take over, report says | Fortune

    [ad_1]

    Apple’s board of directors and senior executives have been accelerating succession plans for Tim Cook, sources told the Financial Times.

    After serving as CEO for 14 years, Cook may step down as early as next year, the report said.

    Apple’s senior vice president of hardware engineering, 50-year-old John Ternus, is widely seen as the most likely successor, but no final decisions have been made yet, sources told the FT.

    The engineer joined Apple’s product design team in 2001 and has overseen hardware engineering for most major products the tech company has launched ever since, according to Ternus’ LinkedIn profile.

He has also played a prominent role during Apple’s most recent keynotes, introducing products like the new iPhone Air. Ternus had been rumored to be Cook’s potential successor, according to previous reports.

    The company is unlikely to name a new CEO before its next earnings report in late January, and an early-year announcement would allow a new leadership team time to settle in before its annual events, the FT said. 

    The succession preparations have been long planned and are not related to the company’s current performance; Apple is expecting strong end-of-year sales, people close to the company told the FT.

    Apple did not immediately respond to Fortune’s request for comment and declined to provide a comment to the FT.

    The $4 trillion company is expecting year-on-year revenue growth of 10% to 12% for its holiday quarter ending in December, fueled by the release of the iPhone 17 model in September.

    Ternus would take the helm of the tech giant at an important time in its evolution. Although Apple has seen sales success with iPhones and new products like AirPods over the past couple of decades, it has struggled to break into AI and keep up with rivals.

    Instead, Apple has been spending significantly less on AI investments than Mark Zuckerberg’s Meta, Amazon, Alphabet, and Microsoft.

    Apple has been criticized by analysts this year for not having a clear AI strategy. And despite approving a multibillion-dollar budget to run its own models via the cloud in 2026, it was reported in June that Apple is even considering using models from OpenAI and Anthropic to power its updated version of Siri, rather than using technology the company has built in-house. 

    Its AI-enabled Siri, originally slated for 2025, will be delayed until 2026 or later due to a series of technical challenges, the company announced earlier this year.

    Apple has also lost a number of senior AI team members since January, many of whom have joined Meta’s AI and Superintelligence Labs during talent poaching wars this year. The exodus of Apple’s AI execs included Ruoming Pang, former head of Apple’s foundation models and core generative AI team, who joined Meta with a compensation package reportedly worth $200 million.

    The company is also dealing with increased competition from one of its most influential former employees.

    In May, Sam Altman’s OpenAI acquired startup io for about $6.5 billion, bringing in former Apple chief designer Jony Ive to build AI devices. The 58-year-old designer was instrumental in creating the iPhone, iPod, and iPad. 

    Cook, Apple’s former operations chief, turned 65 this month. He has grown the company’s market capitalization to $4 trillion from $350 billion in 2011, when he took over the CEO role from company co-founder Steve Jobs.

    Under Cook, Apple became the first publicly traded company to reach $1 trillion in market capitalization in 2018—then it became the first company to reach $3 trillion in market cap in 2022.

    But more recently, its stock price has been lagging behind Big Tech rivals Alphabet, Nvidia, and Microsoft, though Apple is trading close to an all-time high after strong earnings were reported in October.

    Apple has also dealt with tariff complications as U.S.-China trade tensions have disrupted its supply chain.

    Cook has previously said he’d prefer an internal candidate to replace him, adding that the company has “very detailed succession plans.”

    “I really want the person to come from within Apple,” Cook told singer Dua Lipa last year on her podcast At Your Service.

    Nino Paoli

  • ‘Imagine a Cube Floating in the Air’: The New AI Dream Allegedly Driving Yann LeCun Away from Meta


    One of the most important AI scientists in Big Tech wants to scrap the current approach to building human-level AI. What we need, Yann LeCun has indicated, are not large language models, but “world models.”

    LeCun, chief AI scientist at Meta’s Fundamental AI Research division, is expected to resign from Meta soon, according to multiple reports from credible outlets. LeCun is a 65-year-old elder statesman in the world of AI science, and he has had seemingly limitless resources at his disposal working as the big AI brain at one of the world’s largest tech companies.

    Why is he leaving a company that’s been spending lavishly, poaching the most highly-skilled AI experts from other firms, and, according to a July blog post by CEO Mark Zuckerberg, making such astonishing leaps in-house that supposedly the development of “superintelligence is now in sight”?

    He’s actually been hinting at the answer for a long time. When it comes to human-level intelligence, LeCun has become notorious lately for saying LLMs as we currently understand them are duds—no longer worth pursuing, no matter how much Big Tech scales them up. He said in April of last year that “an LLM is basically an off-ramp, a distraction, a dead end.” (The arch AI critic Gary Marcus has ripped into LeCun for “belligerently” defending LLMs from Marcus’ own critiques and then flip-flopping.)

    A Wall Street Journal analysis of LeCun’s career published Friday points to some other possibilities about the reasons for his departure in light of this belief. This past summer, a 28-year-old named Alexandr Wang—the co-founder of the data-labeling firm Scale AI—became the head of AI at Meta, making an upstart LLM devotee LeCun’s boss. And Meta brought in another relatively young chief scientist to work above LeCun this year, Shengjia Zhao. Meta’s announcement of Zhao’s new role touts a scaling “breakthrough” he apparently delivered. LeCun says he has lost faith in scaling.

    If you’re wondering how LeCun can be a chief scientist if Zhao is also a chief scientist, it’s because Meta’s AI operation sounds like it has an eccentric org chart, split into multiple, separate groups. Hundreds of people were laid off last month, apparently in an effort to straighten all this out.

    The Financial Times’ report on LeCun from earlier this week suggests that LeCun will now found a startup focused on “world models.” 

    Again, LeCun has not been shy about why he thinks world models have the answers AI needs. He gave a detailed speech about this at the AI Action Summit in Paris back in February, but it got kind of overshadowed by the U.S. representative, Vice President J.D. Vance, giving a bellicose speech about how everyone had better get out of America’s way on AI. 

    Why is Yann LeCun fascinated by world models?

    As spelled out in his speech, LeCun (who worked on the Meta AI smart glasses, but not to a significant degree on Meta’s Llama LLM) is a huge believer in wearables.

    We’ll need to interact with future wearables as if they are people, he thinks, and LLMs simply don’t understand the world like people do. With LLMs, he says, “we can’t even reproduce cat intelligence or rat intelligence, let alone dog intelligence. They can do amazing feats. They understand the physical world. Any housecat can plan very highly complex actions. And they have causal models of the world.” 

    LeCun provides a thought experiment to illustrate what he thinks might prompt—if you will—a world model, and it’s something he thinks any human can easily do that an LLM simply cannot: 

    “If I tell you ‘imagine a cube floating in the air in front of you. Okay now rotate this cube by 90 degrees around a vertical axis. What does it look like?’ It’s very easy for you to kind of have this mental model of a cube rotating.”  

    With very little effort, an LLM can write a dirty limerick about a hovering, rotating cube, sure, but it can’t really help you interact with one. LeCun avers that this is because of a difference between text data and data derived from processing the many parts of the world that aren’t text. While LLMs are trained on an amount of text it would take 450,000 years to read, LeCun says, a four-year-old child who has been awake for 16,000 hours has processed, with their eyes or by touching, 1.4 x 10^14 bytes of sensory data about the world, which he says is more than an LLM takes in during training.

    These, by the way, are just the estimates LeCun gives in his speech, and it should be noted that he has given others. The abstraction the numbers are pointing to, however, is that LLMs are limited in ways that LeCun thinks world models would not be. 
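    The figures LeCun cites imply a steady sensory data rate, which a bit of back-of-the-envelope arithmetic makes concrete (the inputs are LeCun’s numbers from the speech; the per-second rate is our own derived figure, not something he states):

    ```python
    # Figures quoted from LeCun's AI Action Summit speech.
    hours_awake = 16_000        # waking hours of a four-year-old child
    sensory_bytes = 1.4e14      # bytes of sensory data processed in that time

    # Derived: average sensory bandwidth implied by those two numbers.
    seconds_awake = hours_awake * 3600
    bytes_per_second = sensory_bytes / seconds_awake
    print(f"{bytes_per_second / 1e6:.1f} MB/s")  # prints "2.4 MB/s"
    ```

    In other words, the child in LeCun’s example would be absorbing sensory data at roughly 2.4 megabytes every second, continuously, for four years.
    
    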

    What model does LeCun want to build, and how will he build it?

    LeCun has already begun working on world models at Meta—including making an introductory video that implores you to imagine a rotating cube.

    The model of LeCun’s dreams as described in his AI Action Summit speech contains a current “estimate of the state of the world,” in the form of some sort of abstract representation of, well, everything, or at least everything that’s relevant in the current context, and rather than sequential, tokenized prediction, it “predicts the resulting state of the world that will occur after you take that sequence of actions.” 

    World models will allow future computer scientists to build, he says, “systems that can plan actions—possibly hierarchically—so as to fulfill an objective, and systems that can reason.” LeCun also insists that such systems will have more robust safety features, because the ways we control them will be built in, rather than the systems being mysterious black boxes that spit out text and have to be refined by fine-tuning. 

    In what LeCun says is classical AI—such as the software used in a search engine—all problems are reducible to optimization. His world model, he suggests, will look at the current state of the world, and seek compatibility with some different state by finding efficient solutions. “You want an energy function that measures incompatibility, and given an x, find a y that has low energy for that x,” LeCun says in his speech.  
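    The energy framing can be sketched in a few lines of code: define a scalar function E(x, y) that scores how incompatible a candidate state y is with the current state x, then pick the y with the lowest energy. The toy energy function below (a 90-degree rotation in the plane, echoing the cube example) is purely illustrative and not anything from Meta’s research:

    ```python
    def energy(x, y):
        """Toy incompatibility score E(x, y): low when y matches a
        90-degree counterclockwise rotation of the 2D point x."""
        predicted = (-x[1], x[0])  # rotate (x1, x2) -> (-x2, x1)
        return (predicted[0] - y[0]) ** 2 + (predicted[1] - y[1]) ** 2

    def lowest_energy_y(x, candidates):
        """Given x, find the candidate y with the lowest energy for that x."""
        return min(candidates, key=lambda y: energy(x, y))

    x = (1.0, 0.0)                          # current "state of the world"
    candidates = [(1.0, 0.0),               # unchanged
                  (0.0, 1.0),               # rotated 90 degrees
                  (-1.0, 0.0)]              # rotated 180 degrees

    best = lowest_energy_y(x, candidates)
    print(best)  # prints "(0.0, 1.0)": the 90-degree rotation is fully compatible
    ```

    Real energy-based models search enormous continuous spaces with gradient methods rather than scanning a short candidate list, but the shape of the problem is the same: given an x, find the y with low energy.
    
    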

    Again, these are just credible reports from leaked information about LeCun’s plans, and he hasn’t even confirmed that he’s founding something new. If everything we can cobble together from LeCun’s public statements sounds tentative and a bit fuzzy at the current phase, it should. LeCun sounds like he has a moonshot in mind, and he’s pushing for another ChatGPT-like explosion of uncanny abilities. It could take ages—or literally forever—not to mention billions of investor dollars, for anything truly remarkable to materialize. 

    Gizmodo reached out to Meta for comment on how LeCun’s work fits into the company’s AI mission, and will update if we hear back. 

    Mike Pearl

  • Ex-Meta exec says Mark Zuckerberg taught him a lesson in work-life balance: Now he has strict rules for meetings and emails at his $1 billion tax firm | Fortune


    When Martin Ott joined Facebook to lead its Northern and Central Europe operations as MD in 2012, the company was pre-IPO, pivoting from desktop to mobile phones, and had just a few thousand employees globally. 

    He’s one of the few leaders who witnessed Meta’s evolution firsthand from its scrappy early days under a twenty-something-year-old Mark Zuckerberg to one of the world’s most powerful platforms. 

    But the biggest lesson he took away from that period wasn’t about scale or speed—or grinding all hours of the day to make it. Ott credits Zuckerberg with teaching him the opposite: To focus on making the biggest impact you can during working hours.

    “One of the things I’m also passing on is, there’s only so many hours in a day,” Ott, who’s now CEO of Taxfix, the Berlin-based tax app valued at more than $1 billion, tells Fortune.

    “Ask yourself, what is the real one thing you could do today to really have impact, make a difference? Ask yourself, do you need to be in that meeting or not?” 

    Tech billionaires say you need to work 24/7 to make it, but Ott says you’ll just burn out 

    It’s a refreshing stance, when so many tech leaders say the only way to make it is by always being on. 

    Lucy Guo, the cofounder of Scale AI and the world’s youngest female self-made billionaire, wakes up at 5:30 a.m. and ends her day at midnight. She previously told Fortune that people who crave balance are in the wrong job.

    Meanwhile, Twilio’s CEO Khozema Shipchandler previously told Fortune that the only gap he allows himself “to not think about work is six to eight hours on Saturdays.” 

    And then there’s Reid Hoffman, the visionary behind LinkedIn, who has said that work-life balance simply isn’t possible in the startup world, not least for founders. With the exception of dinner with family, he even admitted he expects employees to constantly be working.

    “That 24/7 only works so long,” Ott says, while adding that switching off is not only important for leaders, but also those working under them. “It’s also protecting team members from getting burned out. You don’t ever want to get there.” 

    “It is making sure that you’re not about 24/7 constant on, but being deliberate.”

    Balance and boundaries for emails and meetings

    As well as focusing only on the meetings where he can make a real impact, Ott has built deliberate practices to protect both his own and his team’s boundaries. 

    “So the most important thing is I structure my day.” Ott gets up early most mornings at around 5:30 a.m. and reads for half an hour before working out.

    “I exercise in the mornings, I go running here on the lake,” he says, adding that he tries to stay in touch with a support network and meditates for his mental health, too. “At times, I meditate every day, and then I drop it. Now I’m in the phase where I’ve dropped it and want to pick it up again.” 

    But even if Ott starts his day early, drafting emails before meetings begin, he’ll make sure they don’t land in his team’s inbox until they start work: “I start writing Slack messages and emails. Often, they only go out with a scheduling function at 8 a.m. or 9 a.m. So I don’t pull people out of their free time, which they need to recharge, because it is a marathon.”

    “Everyone tells you, when you start a company, or you’re running a company, there will be ups and downs. There will be constant crises. There’s a lot of pressure as well,” Ott adds. “You need to make sure you see it actually as a marathon, not a sprint. And that also means you have to maintain the high performance over a long period of time. And that doesn’t work 24/7.”

    Orianna Rosa Royle

  • As Spyware Companies Get Chummy with White House, Apple and WhatsApp Say They’ll Protect Your Phone


    Statements that they’ll help thwart “mercenary spyware” are putting Apple and Meta on the side of platform users with fears about spying tools.

    The Guardian reports that two spyware firms with ties to Israel are seeking to “make inroads with the Trump administration.” Those companies include the NSO Group—the notorious seller of the powerful Pegasus mobile spyware—and a firm called Paragon, which has previously contracted with the government.

    Due to its many, many controversies over the years, NSO has had its fair share of financial problems, but the Israeli firm was recently bought by a U.S.-based group of investors. David Friedman, who previously served as Trump’s ambassador to Israel during his first administration, has been named NSO’s new head executive.

    Recently, Friedman told the Wall Street Journal that he wanted to cozy up to the White House and sell NSO’s services to American law enforcement agencies. “If the administration, as I expect they’ll be, is receptive to considering any opportunity that might keep Americans safer, it will consider us,” Friedman told the newspaper.

    Paragon, meanwhile, is another Israeli spyware firm that was also recently purchased by an American company. Last December, Paragon, maker of a piece of spyware called Graphite, was acquired by a U.S. investment firm called Red Lattice, Reuters previously reported. The Guardian notes that Paragon has worked in the past with the U.S. government, having “entered an agreement with ICE in 2024, under the Biden administration.” The outlet writes: 

    Several people who spoke on the condition of anonymity said the relatively small contract had slipped under the White House’s radar until it was reported by Wired. The contract was then paused in order to determine whether the contract met the requirements of an ambitious executive order that had been signed by the White House in May 2023 and prohibited the operational use of spyware that poses “risks to national security or has been misused by foreign actors to enable human rights abuses around the world.”

    NSO has been accused of letting its products hack into some of the most prominent web messengers and platforms—including Meta’s WhatsApp and Apple’s iMessage. Paragon has also been accused of allowing its tool Graphite to target WhatsApp users. Now, The Guardian reports that Apple and Meta are pledging to protect mobile users from any future spyware.

    A spokesperson from Apple told The Guardian: “Threat notifications are designed to inform and assist users who may have been individually targeted by mercenary spyware and geographic location is not a factor in who they are sent to.” Apple did not respond to Gizmodo’s request for comment.

    When reached for comment by Gizmodo, a Meta spokesperson said: “WhatsApp’s priority is to protect our users by disrupting hacking efforts by mercenary spyware, building new layers of protection and alerting people whose device has come under threat, no matter where they are in the world.”

    Gizmodo reached out to NSO for comment. It was unclear how to reach Paragon Solutions, as its website didn’t appear to have a contact portal. NSO has previously claimed that its products do not target U.S. citizens.

    Authorities in the U.S. have had their eyes on these spyware firms for some time—albeit for different, often contradictory, reasons. On the one hand, in 2021, the Biden administration acknowledged that companies like NSO were having a detrimental impact and blacklisted NSO from U.S. investment. On the other hand, the FBI also spent years mulling whether to use the spyware for domestic law enforcement investigations. Now, the two powerful cyberweapons distributors seem to be attempting to cozy up to the Trump administration.

    Lucas Ropek