ReportWire

Tag: Facebook

  • Inside the Anti-Vax Facebook Group Pushing a Bogus Cure for Autism

    Czelazewicz is just one of many affiliates who sell Pure Body Extra online, including Larry Cook, one of the best known US anti-vax influencers. Cook and his Stop Mandatory Vaccination group were kicked off Facebook in 2020, but only after it had amassed a following of around 200,000. Today, Cook sells Pure Body Extra as a cure for autism via his Detox for Autism website.

    Pure Body Extra is manufactured by a company called Touchstone Essentials, which was founded in 2012 by Eddie Stone and is based in Raleigh, North Carolina.

    The company sells a variety of other health and wellness products. On the product page for Pure Body Extra on the Touchstone Essentials website, the company says the product is safe “for all ages,” and in a section labeled “science,” the company states that the product’s “capacity to capture toxins, heavy metals, and environmental pollutants is evidenced by more than 300 studies documented on PubMed.”

    However, when WIRED analyzed the 300 studies, it emerged that many were nonhuman trials, including numerous tests on animals. Indeed, over the course of the last 10 years, just seven medical trials on clinoptilolite, the particular type of zeolite used in PBX, had been conducted on humans; all of those were in adults, and some didn’t concern detoxification.

    “This is a broader trope in alternative health where [anti-vaxxers] rail against the medical establishment, saying they don’t have your best interests at heart and that you can’t trust ordinary doctors or ordinary medical science, but they do love to cherry-pick studies that seem to show favorable results for some cure that they offer,” says Calum Hood, head of research at the Center for Countering Digital Hate. “They’re then misapplying that science to try and sell people on the idea that a bit of zeolite is going to cure their child’s autism.”

    When asked to provide proof that clinoptilolite was safe for use in children, Touchstone Essentials did not provide a response, but Sonia O’Farrell, the company’s chief marketing officer, told WIRED that the company “does not claim that Pure Body Extra (PBX) can cure or treat autism, or any medical condition for that matter. Pure Body Extra is a dietary supplement featuring natural zeolite to support the body’s detoxification systems. By definition, dietary supplements may not claim to treat, cure, diagnose, or prevent any disease.”

    O’Farrell added that the company does not endorse any individuals who sell its products or how they promote them. “Upon becoming aware of an Affiliate making any medical claims, our compliance team will advise an Affiliate to remove any such materials,” O’Farrell added.

    A statement written in small text at the bottom of the Touchstone Essentials website states: “These statements have not been evaluated by the Food and Drug Administration. Our products are not intended to diagnose, treat, cure, or prevent any disease.”

    The FDA did not respond to a request for comment about the way Pure Body Extra is being promoted online.

    David Gilbert

  • Terrifying Watch Dogs-Like Smart Glasses Make It Possible To Dox Strangers On The Street

    In Ubisoft’s open-world game Watch Dogs (and its sequels), you can quickly scan any NPC you meet and discover facts about them, including their name, address, criminal record, and so on. And now two people have essentially created this tech in real life using Meta’s smart glasses and mostly off-the-shelf tech and software, providing a scary glimpse at our future.

    As reported by 404 Media, two Harvard students have built working smart glasses that use facial recognition technology to automatically identify someone via their face. Not only that, but the glasses then use that information to track down other details about the stranger including their address, phone number, past photos, and family members. According to the two students, AnhPhu Nguyen and Caine Ardayfio, they did this to raise awareness of what is possible with current tech and they have no plans to release it publicly.

    Nguyen and Ardayfio call the project I-XRAY and showed a demo of it in action earlier this week on social media. In the video posted to Twitter, the pair were able to identify multiple strangers without asking them for any details, though some of the data proved to be inaccurate when the duo talked to the people.

    “The motivation for this was mainly because we thought it was interesting, it was cool,” Nguyen told 404 Media. Apparently, other people they showed it to also thought it was “really cool” and some suggested it could be used for “networking” or to “make funny videos.” However, thankfully, someone also mentioned to them how incredibly dangerous this tech could be in the wrong hands. “Some dude could just find some girl’s home address on the train and just follow them home,” said Nguyen.

    As pointed out by 404 Media, this kind of smart-glasses-facial-scanning tech has been around for a few years now. But Google and Facebook, two companies who were working on it, eventually decided to not release their software.

    But you don’t need big tech resources and money to build your own Watch Dogs super glasses that can instantly dox anyone you meet on the street. Nguyen and Ardayfio’s I-XRAY uses Meta’s Ray-Bans and the publicly available face recognition service PimEyes to scan someone’s face with hidden cameras in the glasses and then identify them. That info is then used to scrape the web for phone numbers, other photos, family information, and addresses.

    “We would show people photos of them from kindergarten, and they had never even seen the photo before,” said Ardayfio. “Most people were surprised by how much data they have online.” One time, they were able to show a stranger their mom’s phone number after simply scanning their face.

    “I think people could definitely take [the idea of I-XRAY] and run with it,” Ardayfio said. “If people do run with this idea, I think that’s really bad. I would hope that awareness that we’ve spread on how to protect your data would outweigh any of the negative impacts this could have.” The duo has included information on how to protect yourself in a large document about the project that is freely available online.

    Zack Zwiezen

  • Ta-Nehisi Coates & Sarah Silverman Win Bid For Meta “Chief Decision Maker” Mark Zuckerberg To Be Deposed In AI Suit

    Mark Zuckerberg really doesn’t want to have to answer some hard questions about Meta’s artificial intelligence push and goals. However, a federal judge this week told the Facebook founder that is exactly what he has to do.

    “Plaintiffs have made an evidentiary showing that Zuckerberg is the chief decision maker and policy setter for Meta’s Generative AI branch and the development of the large language models at issue in this action,” U.S. District Judge Thomas Hixson noted on September 24 in the potential class action initially filed by authors Sarah Silverman, Richard Kadrey, and Christopher Golden last year, and now including Ta-Nehisi Coates and others.

    Along with a more imperiled suit against OpenAI, the writers took Meta to court in mid-2023 over concerns that their copyrighted work and books had been illegally downloaded and used to train the company’s large language model AI software.

    Bedwetter scribe Silverman and National Book Award winner Coates, along with other plaintiffs, allege that “much of the material in Meta’s training dataset, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by Meta without consent, without credit, and without compensation.”

    With some legal wiggle room here and there, Meta denies it accessed the authors’ work for its LLaMA system. Meta’s army of attorneys has also been trying to push the line that there are loads of other people at the tech giant better qualified than Zuckerberg to be questioned by David Boies and other lawyers for the plaintiffs.

    It didn’t fly.

    “Plaintiffs do not generically argue, as Meta suggests, that because Zuckerberg is the CEO of the company that he is therefore in charge of everything,” the judge noted in his order denying Meta’s motion to keep the CEO from having to face Silverman and others lawyers’ inquiries. “Rather, they have submitted evidence of his specific involvement in the company’s AI initiatives. They have submitted evidence indicating Zuckerberg was the principal decision maker concerning Meta’s decision to open source the language model. They have also submitted evidence of Zuckerberg’s direct supervision of Meta’s AI products.”

    Judge Hixson also stated: “Given this factual showing, the Court is not going to require Plaintiffs to exhaust other forms of discovery before they depose Zuckerberg. They’ve made a solid case that this deposition is worth taking.”

    Zuckerberg, never a big fan of being put in front of a microphone, has yet to have a time and date scheduled for his deposition. With that, a hearing on discovery in the case wrapped up earlier this afternoon in San Francisco that could see the deposition occurring sooner rather than later.

    By then, everything AI could be different, again.

    Coming up on two years since ChatGPT brought AI to the masses, so to speak, the technology is quickly moving more and more to the fore in almost all aspects of society and industry.

    The results are mixed, depending on your perspective.

    On the one hand, for instance, California Gov. Gavin Newsom signed legislation earlier this month to partially protect the likeness of actors and performers, living and dead. At almost the same time, Lionsgate and applied AI research company Runway unveiled a partnership on September 18 to develop AI customized to the studio’s proprietary portfolio of film and television content like John Wick.

    With a bit of nose-thumbing at the court and a nudge toward the seemingly inevitable future, Zuckerberg was on stage today in Menlo Park, California, at the company’s Meta Connect conference to speak on all things AI. Part of the rollout and announcements was the news that Meta’s AI chatbot will now communicate in the voices of Awkwafina, Dame Judi Dench, Kristen Bell, John Cena, or Keegan-Michael Key.

    Sadly, Zuckerberg will have to give his deposition in his own voice.

    Dominic Patten

  • Meta Missed Out on Smartphones. Can Smart Glasses Make Up for It?

    Meta has dominated online social connections for the past 20 years, but it missed out on making the smartphones that primarily delivered those connections. Now, in a multiyear, multibillion-dollar effort to position itself at the forefront of connected hardware, Meta is going all in on computers for your face.

    At its annual Connect developer event today in Menlo Park, California, Meta showed off its new, more affordable Meta Quest 3S virtual reality headset and its improved, AI-powered Ray-Ban Meta smart glasses. But the headliner was Orion, a prototype pair of holographic display glasses that chief executive Mark Zuckerberg said have been in the works for 10 years.

    Zuckerberg emphasized that the Orion glasses—which are available only to developers for now—aren’t your typical smart display. And he made the case that these kinds of glasses will be so interactive that they’ll usurp the smartphone for many needs.

    “Building this display is different from every other screen you’ve ever used,” Zuckerberg said on stage at Meta Connect. Meta chief technology officer Andrew Bosworth had previously described this tech as “the most advanced thing that we’ve ever produced as a species.”

    The Orion glasses, like a lot of heads-up displays, look like the fever dream of techno-utopians who have been toiling away in a highly secretive place called “Reality Labs” for the past several years. One WIRED reporter on the ground noted that the thick black glasses looked “chunky” on Zuckerberg.

    As part of the on-stage demo, Zuckerberg showed how Orion glasses can be used to project multiple virtual displays in front of someone, respond quickly to messages, video chat with someone, and play games. In the messages example, Zuckerberg noted that users won’t even have to take out their phones. They’ll navigate these interfaces by talking, tapping their fingers together, or by simply looking at virtual objects.

    There will also be a “neural interface” built in that can interpret brain signals, using a wrist-worn device that Meta first teased three years ago. Zuckerberg didn’t elaborate on how any of this will actually work or when a consumer version might materialize. (He also didn’t get into the various privacy complications of connecting this rig and its visual AI to one of the world’s biggest repositories of personal data.)

    He did say that the imagery that appears through the Orion glasses isn’t pass-through technology—where external cameras show wearers the real world—nor is it a display or screen that shows the virtual world. It’s a “new kind of display architecture,” he said, that uses projectors in the arms of the glasses to shoot waveguides into the lenses, which then reflect light into the wearer’s eyes and create volumetric imagery in front of you. Meta has designed this technology itself, he said.

    The idea is that the images don’t appear as flat, 2D graphics in front of your eyes but that the virtual images now have shape and depth. “The big innovation with Orion is the field of view,” says Anshel Sag, principal analyst at Moor Insights & Strategy, who was in attendance at Meta Connect. “The field of view is 72 degrees, which makes it much more engaging and useful for most applications, whether gaming, social media, or just content consumption. Most headsets are in the 30- to 50-degree range.”

    Lauren Goode

  • Meta Releases Llama 3.2—and Gives Its AI a Voice

    Mark Zuckerberg announced today that Meta, his social-media-turned-metaverse-turned-artificial intelligence conglomerate, will upgrade its AI assistants to give them a range of celebrity voices, including those of Dame Judi Dench and John Cena. The more important upgrade for Meta’s long-term ambitions, though, is the new ability of its models to see users’ photos and other visual information.

    Meta today also announced Llama 3.2, the first version of its free AI models to have visual abilities, broadening their usefulness and relevance for robotics, virtual reality, and so-called AI agents. Some versions of Llama 3.2 are also the first to be optimized to run on mobile devices. This could help developers create AI-powered apps that run on a smartphone and tap into its camera or watch the screen in order to use apps on your behalf.

    “This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” Zuckerberg said on stage at Connect, a Meta event held in California today.

    Given Meta’s enormous reach with Facebook, Instagram, WhatsApp, and Messenger, the assistant upgrade could give many people their first taste of a new generation of more vocal and visually capable AI helpers. Meta said today that more than 180 million people already use Meta AI, as the company’s AI assistant is called, every week.

    Meta has lately given its AI a more prominent billing in its apps—for example, making it part of the search bar in Instagram and Messenger. The new celebrity voice options available to users will also include Awkwafina, Keegan-Michael Key, and Kristen Bell.

    Meta previously gave celebrity personas to text-based assistants, but these characters failed to gain much traction. In July the company launched a tool called AI Studio that lets users create chatbots with any persona they choose. Meta says the new voices will be made available to users in the US, Canada, Australia, and New Zealand over the next month. The Meta AI image capabilities will be rolled out in the US, but the company did not say when the features might appear in other markets.

    The new version of Meta AI will also be able to provide feedback on and information about users’ photos; for example, if you’re unsure what bird you’ve snapped a picture of, it can tell you the species. And it will be able to help edit images by, for instance, adding new backgrounds or details on demand. Google released a similar tool for its Pixel smartphones and for Google Photos in April.

    Powering Meta AI’s new capabilities is an upgraded version of Llama, Meta’s premier large language model. The free model announced today may also have a broad impact, given how widely the Llama family has been adopted by developers and startups already.

    In contrast to OpenAI’s models, Llama can be downloaded and run locally without charge—although there are some restrictions on large-scale commercial use. Llama can also more easily be fine-tuned, or modified with additional training, for specific tasks.

    Will Knight

  • Newsom to sign California bill to limit ‘addictive’ social media feeds for kids

    California will take a major step in its fight to protect children from the ills of social media with Gov. Gavin Newsom’s signature on a bill to limit the ability of companies to provide “addictive feeds” to minors.

    The governor’s office said Newsom on Friday will sign Senate Bill 976, named the Protecting Our Kids From Social Media Addiction Act and introduced by state Sen. Nancy Skinner (D-Berkeley). The bill was supported by state Atty. Gen. Rob Bonta and groups such as the Assn. of California School Administrators, Common Sense Media and the California chapter of the American Academy of Pediatrics.

    Newsom’s wife, First Partner Jennifer Siebel Newsom, is also outspoken about the links between social media consumption and low self-esteem, depression and anxiety among youth.

    The legislation attracted an unusual collection of opponents, including the American Civil Liberties Union of California, Equality California and associations representing giants in the industry that own TikTok, Instagram and Facebook. The California Chamber of Commerce argued that the legislation “unconstitutionally burdens” access to lawful content, setting up the potential for another lawsuit in an ongoing court battle between the state and social media companies over use of the platforms by children.

    “Every parent knows the harm social media addiction can inflict on their children — isolation from human contact, stress and anxiety, and endless hours wasted late into the night,” Newsom said. “With this bill, California is helping protect children and teenagers from purposely designed features that feed these destructive habits.”

    The bill, which will take effect Jan. 1, 2027, with Newsom’s signature, prohibits internet services and applications from providing “addictive feeds,” defined as media curated based on information gathered on or provided by the user, to minors without parental consent. SB 976 also bans companies from sending notifications to users identified as minors between midnight and 6 a.m. or during the school day from 8 a.m. to 3 p.m. unless parents give the OK.

    The bill will effectively require companies to make posts from people children know and follow appear in chronological order on their social media feeds instead of in an arrangement to maximize engagement. Proponents of the bill point to warnings from U.S. Surgeon General Vivek Murthy and others about a mental health crisis among youths, which studies show is exacerbated by the use of social media.
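The distinction the bill draws can be illustrated with a toy sketch. This is a hypothetical example, not any platform's actual ranking code; the post records and field names are invented for illustration. An engagement-ranked feed orders posts by a predicted engagement score, while the default the bill requires for minors simply sorts followed accounts' posts newest-first.

```python
from datetime import datetime, timezone

# Hypothetical post records; the schema is illustrative only.
posts = [
    {"author": "friend_a", "ts": datetime(2024, 9, 20, 9, 0, tzinfo=timezone.utc), "engagement_score": 0.2},
    {"author": "friend_b", "ts": datetime(2024, 9, 21, 8, 0, tzinfo=timezone.utc), "engagement_score": 0.9},
    {"author": "friend_c", "ts": datetime(2024, 9, 19, 7, 0, tzinfo=timezone.utc), "engagement_score": 0.5},
]

def engagement_feed(posts):
    # An "addictive feed" in the bill's sense: ordered to maximize
    # predicted engagement rather than recency.
    return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)

def chronological_feed(posts):
    # The default SB 976 requires for minors: newest posts from
    # followed accounts first, no engagement-based curation.
    return sorted(posts, key=lambda p: p["ts"], reverse=True)
```

The same three posts come back in a different order from each function, which is the entire behavioral change the law mandates as the default for minors.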

    “As a mother, I’m proud of California’s continued leadership in holding technology companies accountable for their products and ensuring those products are not harmful to children. Thank you to the Governor and Senator Skinner for taking a critical step in protecting children and ensuring their safety is prioritized over companies’ profits,” Siebel Newsom said.

    The industry has argued that it’s false to assume that feeds curated by an algorithm are harmful but that a chronological feed is safe. The ACLU also argued that age verification creates potential privacy concerns because it could require the collection of additional user data that could be at risk in a security breach and because it could threaten the 1st Amendment rights of people who cannot verify their age.

    Several groups advocating for LGBTQ+ youths suggested the bill could limit youths’ ability to engage on platforms that offer emotional support for their identities, particularly for kids who live in communities that might be hostile to their identity. Giving more control to parents could also potentially result in parents choosing settings that share sensitive information about the child, the groups said.

    The bill marks the latest action in a battle between state government and social media companies taking place in the California Legislature and the court system over the use of platforms by children.

    In October, Bonta’s office filed a lawsuit with 32 other states against Meta, the parent company of Facebook, Instagram and WhatsApp, alleging that the company designed apps specifically to addict young users while misleading the public about the adverse effects.

    A bill that failed last year in the California Legislature would have made social media companies liable for up to $250,000 in damages if they knowingly promoted features that could harm children. Portions of a 2022 law that sought to require companies to provide privacy protections for children have also been held up in court.

    Taryn Luna

  • Facebook, YouTube, WhatsApp Surveil, Monetize User Data: FTC | Entrepreneur

    In December 2020, the Federal Trade Commission ordered the biggest social media and streaming companies in the world, including Twitch owner Amazon, Facebook (now Meta), YouTube, Reddit, WhatsApp, Twitter (now X), Snap, Discord and TikTok’s ByteDance, to share how they used their users’ personal information.

    On Thursday, FTC staff released a 129-page report, which found that these companies all “harvest an enormous amount of Americans’ personal data and monetize it to the tune of billions of dollars a year,” stated FTC chair Lina M. Khan.

    “While lucrative for the companies, these surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking,” Khan said.

    Related: The FTC Is Banning Businesses From Writing, Buying Their Own Reviews and Bot Followers

    The report called out major social media companies for collecting vast swaths of personal data and using it in ways their users may not expect. The FTC found, for example, that “many” of these companies buy data from third-party brokers about where a user is located, how much they make per year, and what their interests are, to understand more about a user’s activity on the Internet outside of the social media platform.

    This personal information becomes the basis of targeted ads, which most social media sites rely on for revenue. Meta, the parent company of Facebook, Instagram, WhatsApp, and other products and platforms, reported that 98% of its $39.07 billion revenue in its second quarter came from ads on Facebook and Instagram.

    Related: Federal Judge Blocks FTC’s Noncompete Ban 2 Weeks Before It Would Have Taken Effect — Here’s Why

    According to the FTC report, it’s difficult for users to understand how social media platforms collect their information and how much is used to tailor ads. Many may not even be aware of what’s happening behind the scenes.

    Plus, even if users are tuned in and know that social media platforms are using their data, they still don’t have “any meaningful control over how personal information [is] used,” the FTC report shows.

    Companies use personal information to fuel algorithms, data analytics, and AI that, in turn, shape content recommendations, search, advertising, and other crucial aspects of their business. The FTC recommended that companies be transparent about the data they collect, do more to protect privacy, and put users in charge of data.

    The FTC further found that if a user wants to delete their data, some sites will de-identify the data they have on hand, but keep it on file instead of wiping it all. The platforms that did delete personal data upon request would select which parts to delete and fail to remove all of it, according to the report.
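The practice the report describes, de-identifying a record rather than deleting it, can be sketched in a few lines. This is a hypothetical illustration with an invented schema, not any company's actual pipeline: de-identification strips direct identifiers but keeps the behavioral data on file, while deletion removes the record entirely.

```python
import hashlib

# Hypothetical user records; the schema is illustrative only.
records = [
    {"user_id": "u123", "email": "alice@example.com", "watch_history": ["video_a", "video_b"]},
]

def deidentify(record):
    # De-identification: the email is dropped and the user ID is replaced
    # with an irreversible hash, but the behavioral data stays on file.
    return {
        "user_id": hashlib.sha256(record["user_id"].encode()).hexdigest()[:12],
        "watch_history": record["watch_history"],  # retained, not wiped
    }

def delete(records, user_id):
    # Actual deletion: the user's record is removed entirely.
    return [r for r in records if r["user_id"] != user_id]
```

The point of the FTC's distinction is visible in the sketch: after `deidentify`, the company still holds the watch history and can keep feeding it into analytics, whereas after `delete` nothing remains.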

    Related: The FTC Is Suing to Block a Mega-Merger That Would Unite Coach and Michael Kors

    “Companies can and should do more to protect consumers’ privacy, and Congress should enact comprehensive federal privacy legislation that limits surveillance and grants consumers data rights,” the report stated.

    Sherin Shibu

  • Meta Connect Starts Wednesday. Here’s What to Expect

    Meta Connect, the big developer event and hardware showcase from the company that runs Facebook and Instagram, is kicking off next week. Meta is likely to show off its new VR and mixed-reality technology, put a shiny polish on its meandering metaverse ambitions, and delve into all the fresh ways it plans to squeeze artificial intelligence into every crevice of its devices and services.

    The event takes place on Wednesday, September 25, starting at 10 am Pacific time. The keynote address, where most of the new stuff will be announced, will be livestreamed. The host for the event will be Meta CEO and newly minted cool guy Mark Zuckerberg. Zuck’s hour-long presentation will be followed by a developer-focused address at 11 am led by Meta CTO and Reality Labs chief Andrew Bosworth. You can watch the events on the Meta Connect website or on Meta’s YouTube channel. And yes, you can also watch it in VR in Meta Horizon.

    The focus of the event will likely be a fusion of Meta’s mixed-reality efforts and its AI ambitions across its product line. Like any tech event, there are bound to be surprises. Here are the big things to look out for.

    Blurry MetaVision

    The one thing Meta won’t likely be announcing is a very expensive VR headset. It’s a move informed by where the mixed-reality-device market is right now—and whether people actually want to spend big to buy in. Instead, rumors abound about a so-called Meta Quest 3S, a headset which could be a cheaper version of the Meta Quest 3 with lighter features.

    Meta was briefly the bigwig in the AR/VR space 10 years ago, when the company (then Facebook) bought the VR company Oculus. In 2021, Facebook changed its name to Meta, and it has since sunk $45 billion into its vision of a digital universe that most people just don’t seem to give much of a damn about. Workplaces aren’t using Meta’s Horizon Workrooms that much—we’re all still on Zoom—and despite the initial bouts of expensive corporate land grabs for digital real estate, users aren’t exactly eager to move into the metaverse.

    Other companies have struggled to find their virtual footing. Apple released its first mixed-reality headset, the $3,500 Apple Vision Pro, in February. Since then, the product has been regarded as a rare misstep for the company, or at least very clearly a first-generation product not intended for the masses. The device didn’t sell very well and was widely criticized as being an expensive, heavy, and ultimately lonely experience. (Apple mentioned the Vision Pro only once, in passing, at its optimistic iPhone announcement event on September 9.)

    Had the Vision Pro’s, well, vision panned out, Meta may have been more inclined to pursue the pricy premium category of VR headset. In August, The Information reported that Meta seems to have abandoned—or at least delayed—plans to reveal an update to its Oculus Quest Pro that would have gone into the ring against Apple’s Vision Pro. Bosworth, Meta’s CTO, responded to that news on Meta’s Threads platform and insisted the move is not that big of a deal, but rather a natural part of the company’s device iterations. Still, it is a move that makes sense in the aftermath of the Apple Vision Pro fizzling out.

    Boone Ashworth

  • Social media companies, video streaming services engage in

    Large social media companies and streaming platforms — including Amazon, Alphabet-owned YouTube, Meta’s Facebook and TikTok — engage in a “vast surveillance of users” to profit off their personal information, endangering privacy and failing to adequately protect children, the Federal Trade Commission said Thursday.

    In a 129-page report, the agency examined how some of the world’s biggest tech players collect, use and sell people’s data, as well as the impact on children and teenagers. The findings highlight how the companies compile and store troves of info on both users and non-users, with some failing to comply with deletion requests, the FTC said.

    “The report lays out how social media and video streaming companies harvest an enormous amount of Americans’ personal data and monetize it to the tune of billions of dollars a year,” FTC Chair Lina Khan said in a statement. “While lucrative for the companies, these surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking.”

    According to the FTC, the business models of major social media and streaming companies center on the mass collection of people’s data, especially through targeted ads, which account for most of their revenue.

    “With few meaningful guardrails, companies are incentivized to develop ever-more invasive methods of collection,” the agency said in the report. 

    “Especially troubling”

    The risk such practices pose to child safety online is “especially troubling,” Khan said.

    Child advocates have long complained that federal child privacy laws let social media services off the hook provided their products are not directed at kids and that their policies formally bar minors on their sites. Big tech companies also often claim not to know how many kids use their platforms, critics have noted.

     “This is not credible,” FTC staffers wrote. 

    Meta on Tuesday launched Instagram Teen Accounts, a more limited experience for younger users of the platform, in an effort to assuage concerns about the impact of social media on kids.

    The report recommends steps, including federal legislation, to limit surveillance and give consumers rights over their data.

    Congress is also moving to hold tech companies accountable for how online content affects kids. In July, the Senate overwhelmingly passed bipartisan legislation aimed at protecting children called the Kids Online Safety Act. The bill would require companies to strengthen kids’ privacy and give parents more control over what content their children see online.



    YouTube-owner Google defended its privacy policies as the strictest in the industry.

    “We never sell people’s personal information, and we don’t use sensitive information to serve ads. We prohibit ad personalization for users under 18, and we don’t personalize ads to anyone watching ‘made for kids content’ on YouTube,” a Google spokesperson said in an email.

    Amazon, which owns the gaming platform Twitch, did not immediately respond to a request for comment. Meta, which also owns Instagram, declined to comment.

    The FTC report comes nearly a year after attorneys general in 33 states sued Meta, saying the company for years kept kids online as long as possible in order to collect personal data to sell to advertisers.

    Meta said at the time that no one under 13 is allowed to have an account on Instagram and that it deletes the accounts of underage users whenever it finds them. “However, verifying the age of people online is a complex industry challenge,” the company said.

    The issue of how Meta’s platforms impact young people also drew attention in 2021 when Meta employee-turned-whistleblower Frances Haugen shared documents from internal company research. In an interview with CBS News’ Scott Pelley, Haugen pointed to data indicating that Instagram worsens suicidal thoughts and eating disorders for certain teenage girls. 


  • Meta hides warning labels for AI-edited images


    Starting next week, Meta will no longer put an easy-to-see label on Facebook images that were edited using AI tools, making it much harder to tell whether an image appears in its original state or has been doctored. To be clear, the company will still add a note to AI-edited images, but you’ll have to tap the three-dot menu at the upper right corner of a Facebook post and then scroll down to find “AI Info” among the many other options. Only then will you see the note saying that the content in the post may have been modified with AI.

    Images generated using AI tools, however, will still be marked with an “AI Info” label that can be seen right on the post. Clicking on it will show a note that will say whether it’s been labeled because of industry-shared signals or because somebody self-disclosed that it was an AI-generated image. Meta started applying AI-generated content labels to a broader range of videos, audio and images earlier this year. But after widespread complaints from photographers that the company was flagging even non-AI-generated content by mistake, Meta changed the “Made with AI” label wording into “AI Info” by July.

    The social network said it worked with companies across the industry to improve its labeling process and that it’s making these changes to “better reflect the extent of AI used in content.” Still, doctored images are widely used these days to spread misinformation, and this change could make it trickier to identify false news, which typically proliferates during election season.

    Mariella Moon

  • Mark Zuckerberg Vows to Be Neutral–While Tossing Gifts to Trump and the GOP


    This week Mark Zuckerberg sent a letter to Jim Jordan, the chair of the House Judiciary Committee. For months, the GOP-led committee has been on a crusade to prove that Meta, via its once-eponymous Facebook app, engaged in political sabotage by taking down right-wing content. Its investigation has involved thousands of documents and interviews with multiple employees, yet it has failed to locate a smoking gun. Now, under the guise of offering his take on the subject, Zuckerberg has written a mea culpa of a letter in which he seems to indicate that there was something to the GOP conspiracy theory.

    Specifically, he said that in 2021 the Biden administration asked Meta “to censor some Covid-related content.” Meta did take the posts down, and Zuckerberg now regrets the decision. He also conceded that it was wrong to take down some content regarding Hunter Biden’s laptop, which the company did after the FBI warned that the reports might be Russian disinformation.

    What stood out to me, besides the letter’s simpering tone, was how Zuckerberg used the word “censor.” For years the right has been using that word to describe what it regards as Facebook’s systematic suppression of conservative posts. Some state attorneys general have even used that trope to argue that the company’s content should be regulated, and Florida and Texas have passed laws to do just that. Facebook has always contended that the First Amendment is about government suppression, and by definition its content decisions could not be characterized as such. Indeed, the Supreme Court dismissed the lawsuits and blocked the laws.

    Now, by using that term to describe the removal of the Covid material, Zuckerberg seems to be backing down. After years of insisting that, right or wrong, a social media company’s content decisions did not deprive people of First Amendment rights—and in fact said that by making such decisions, the company was invoking its free speech rights—Zuckerberg is now handing its conservative critics just what they wanted.

    I asked Meta spokesperson Andy Stone if the company now agrees with the GOP that some of its decisions to take down content can be referred to as “censoring.” Stone said that Zuckerberg was referring to the government when he used that term. But he also pointed me to Zuckerberg’s affirmation that the ultimate decision to remove the posts was Meta’s own. (Responding to the Zuckerberg letter, the White House said, “When confronted with a deadly pandemic, this Administration encouraged responsible actions to protect public health and safety,” and left the final decision to Facebook.)

    Meta can’t have it both ways. The letter is clear—Zuckerberg said the government pressured Meta to “censor” some Covid content. Meta took that material down. Ergo, Meta now characterizes some of its own actions as censorship. Seizing on this, the GOP members of the Judiciary Committee quickly tweeted that Zuckerberg has now outright admitted “Facebook censored Americans.”

    Stone did say that Meta still does not consider itself a censor. So is Meta disputing that GOP tweet? Stone wouldn’t comment on it. It seems that Meta will offer no pushback while GOP legislators and right-wing commentators crow that Facebook now concedes that it blatantly censored conservatives as a matter of policy.

    Meta’s CEO presented Jordan and the GOP with another gift in his letter, involving his private philanthropy. During the 2020 election, Zuckerberg helped fund nonpartisan initiatives to protect people’s right to vote. Republicans criticized Zuckerberg’s effort as aiding the Democrats. Zuckerberg still insists he wasn’t advocating that people vote a certain way, just ensuring they were free to cast ballots. But, he wrote Jordan, he recognized that some people didn’t believe him. So, apparently to indulge those ill-informed or ill-intentioned critics, he now vows not to fund nonpartisan voting efforts during this election cycle. “My goal is to be neutral and not play a role one way or another—or even appear to play a role,” he wrote.

    Steven Levy

  • Destiny 2 Pointers, How To Nab Fallout 76’s Union Power Armor, And More Of The Week’s Top Tips


    Screenshot: The Gentlebros / Kotaku

    Cat Quest III departs from the first two games of this light-hearted action-adventure series in a variety of ways, especially with its pirate-themed naval combat. Still, it also retains a lot of familiar gameplay mechanics and concepts that ensure if you played the previous games, you’ll feel right at home. Whether you’re a returning player well-versed in Cat Quest’s history or brand new to the franchise, we’ve compiled a solid list of tips to help you get started in this feline-focused adventure. – Billy Givens

    Kotaku Staff

  • Colorado authorities warn first day of school pictures could pose safety risks


    JEFFERSON COUNTY, Colo. – As students across Colorado head back to school this month, authorities are warning about social media posts meant to celebrate the new school year.

    Taking a picture of a child on the first day of school is a tradition for many families, but Jefferson County Sheriff’s Office Sergeant Michael Harris with the Child Sex Offenders Internet Investigations Unit (Cheezo) said sharing those photos can come with unintended consequences.

    “Once you send something, whether it’s a message or a photo, you lose all control over that photo. Just like when you have your kid go to the mall, you tell them not to talk to strangers, but yet you’re posting these photos. And if you don’t know everyone in your social media or on your friends list, there could be somebody that takes an interest in your cute child,” Harris said.

    Harris suggests only sending first day of school pictures to family and friends who parents know and trust.

    But Harris said if parents choose to post those pictures on social media, they should double-check their privacy settings to make sure only their friends can see them, or stick to platforms like WhatsApp that encrypt photos.

    “When we go and teach at schools, we tell the kids, you need to turn off location services, because it shows the exact place where that picture was taken. We don’t want that, because if you’re taking it at home, now they have your home address. If you’re taking it at school, now we know what school you go to,” Harris said.

    Harris said now is the time to be vigilant and put parental controls in place.


    Micah Smith

  • Instagram Will Let You Make Custom AI Chatbots—Even Ones Based on Yourself


    Meta’s AI Studio handbook says that users can customize a chatbot by providing a detailed description, along with a name and image, and then specifying how it should respond to specific input. Llama will then draw on those instructions to improvise its responses. Meta says Instagram users can “customize their AI based on things like their Instagram content, topics to avoid, and links they want it to share.”

    Over the past year, Meta has become an AI success story thanks to its decision to offer robust AI models for free. Last week, the company released a powerful version of its large language model Llama, providing developers, researchers, and startups with free access to a model comparable to the powerful paid one behind OpenAI’s ChatGPT. The company says its new chatbots are all based on the latest version of Llama.

    And yet Meta has struggled to find the right tone and niche for its own AI offerings. Last September, the company launched a range of AI chatbots loosely based on real celebrities. These included a fantasy roleplay dungeon master bot based on Snoop Dogg; a wisecracking sports bot based on Tom Brady; and an everyday companion inspired by Kendall Jenner.

    These bots failed to become big hits, however, and Meta has retired them. Jon Carvill, a spokesman for Meta, said the company had learned from the earlier experiments. “AI Studio is an evolution,” he said.

    There is plenty of evidence that users may find fully customizable bots more compelling. A company called Character AI, founded by several ex-Google employees who helped make breakthroughs in AI, has attracted millions of users to its own custom chatbots.

    Zuckerberg also touted other new open source AI advances from Meta at SIGGRAPH. The company has developed a new tool for identifying the contents of images and video called Segment Anything Model (SAM) 2. The previous version is widely used for image analysis. Meta says SAM 2 could be used to more efficiently analyze the contents of video, for instance. Zuckerberg showed off the technology tracking the cattle roaming his Kauai ranch. “Scientists use this stuff to study coral reefs and natural habitats and evolution of landscapes,” he told Huang.

    Earlier in the day, in an on-stage interview with WIRED’s Lauren Goode, Huang, the NVIDIA CEO, said he would “absolutely” want a “Jensen AI” that knows everything he’s ever said, written, and done. “You’ll be able to prompt it, and hopefully something smart gets said,” he said. He could force stock analysts to pepper the bot—instead of him—with questions about the company. “That’s the first thing that has to go,” he said with a laugh.

    Will Knight, Paresh Dave

  • Sextortion scams run by Nigerian criminals are targeting American men, Meta says



    So-called sextortion scams are on the rise, with criminals from Nigeria frequently targeting adult men in the U.S., according to social media giant Meta. 

    Meta on Wednesday said it has removed about 63,000 accounts from Nigeria that had been attempting to target people with financial sextortion scams. In such scams, criminals pretend to be someone else, typically an attractive woman, in an attempt to trick potential victims into sending nude photos of themselves. Upon receiving nude pics, the scammer then threatens to release the photos unless the sender pays up. 

    Meta’s crackdown on sextortion has included the removal of 200 Facebook pages and 5,700 Facebook groups, all from Nigeria, that were providing tips for conducting such scams, such as scripts for talking with victims. The groups also included links to collections of photos that scammers could use when making fake accounts to catfish victims, Meta said. 

    Meta is also testing new technology that could steer victims away from falling for sextortion scams, such as a new auto-blur feature in Instagram DMs that will blur images if nudity is detected, the company said. 

    “First of all, this goes without saying that financial sextortion is a horrific crime and can have devastating consequences,” said Antigone Davis, Meta’s global head of safety, in a call with reporters. “It’s why we are particularly focused on it right now.”

    The most common platforms for sextortion scams are Instagram, owned by Meta, and Snapchat, according to a recent study from the National Center for Missing & Exploited Children (NCMEC) and Thorn, a nonprofit that uses technology to battle the spread of child sexual abuse material. According to the study, most sextortion scams originate from either Nigeria or Côte d’Ivoire.

    Indiscriminate scammers

    Meta said it found that scammers are “indiscriminate,” sending requests to many individuals in order to get a few responses, Davis said. While most of the attempts were sent to adult men in the U.S., Meta did see some scammers trying to reach teens, she added. 

    Some of the Facebook accounts, pages and groups removed by Meta were run by the Yahoo Boys, a loose federation of scammers that operate in Nigeria, Davis said. 

    The FBI has sought to highlight the issue of financial sextortion scams targeting teenagers, with the agency noting that at least 20 children who were victims of these scams had died by suicide. Many victims feel fear, embarrassment and concerns about long-term consequences, according to the Thorn and NCMEC report. 

    Social media users should be cautious if an account with a “very stylized, especially good-looking” photo reaches out to them or asks to exchange messages, Davis said. “If you have never been messaged by this person before, that should give you pause,” she added.

    “If somebody sends you an image first, that is often to try to bait you to send an image second, or try to gain trust and build trust,” Davis noted. “This is one of those areas where if you have any suspicion, I would urge caution.”

    Social media users should also look at their privacy settings for messaging, she recommended. For instance, people can control their Facebook Messenger settings to filter the people from whom they can receive messages, such as blocking people other than their Facebook friends. 


  • Militias Are Recruiting Off of the Trump Shooting


    Militia and anti-government groups across the United States are using the attempted assassination of former president Donald Trump as an opportunity to organize, recruit, and train.

    “An attack on President Trump was an attack on us, people like us—like-minded American patriots,” says Scot Seddon, the Pennsylvania-based founder of the American Patriots Three Percenters (APIII), in a video posted to TikTok on Sunday. APIII is a decentralized militia network with chapters across the US. “There comes a point in time where everybody in this group needs to start being accountable for what they’re doing to help grow the organization and building a network of like-minded people in their area. Because they’re coming for us.”

    Seddon goes on in the video to say that he’s looking at coordinating a meeting with other militias around Pennsylvania. “This is not going to just go away. We need to become fuckin’ strong, fuckin’ lions,” says Seddon. “Start reaching out to individuals in your state that are trustworthy, that have the like-minded vision of local strong communities, to hold down the fort, just in case [of] war, or for when shit hits the fan.”

    In the aftermath of the shooting at Trump’s campaign rally in Butler, Pennsylvania—which left the former president wounded in his ear, one person dead, and two people injured—incendiary rhetoric and calls for retaliatory violence exploded online.

    Katie Paul, director of the Tech Transparency Project, says that this type of rhetoric has been pretty commonplace in online spaces since 2020, especially since January 6. But she’s particularly concerned about the heightened rhetoric in tandem with aggressive recruitment efforts by militia groups, who historically have opportunistically pounced on moments of national chaos to encourage organizing and training. Paul says the confluence of militia activity and heightened rhetoric could inspire “individuals who are susceptible to online influence and acceleration” who “could be triggered to act on their own.” She also sees militias’ emphasis on organization over knee-jerk calls for retaliatory violence as a sign that the movement is focused on long-term goals and growth.

    In the past year, APIII has made a significant recruitment push across major social media platforms, such as Facebook, X, TikTok, and even NextDoor, according to research from the Tech Transparency Project shared exclusively with WIRED. Despite featuring “Three Percenters” in its name—a clear nod to the militia movement—APIII touts a disclaimer on its website insisting that it is not a militia. That’s in line with the broader trend seen since January 6, 2021, when paramilitary activists scrambled to distance themselves from the militia movement implicated in the Capitol riot.

    But groups like APIII have increasingly been trying to rebuild the militia movement from the ground up, urging people to get organized in their communities. According to Seddon, APIII and the Light Foot Militia, another decentralized paramilitary group with chapters nationwide, have been coordinating closely. Last month, a video circulated on TikTok and Facebook purporting to show a training meetup with APIII and Light Foot in an undisclosed location. About 100 heavily armed men and women in fatigues are shown standing in formation. Text over the video reads: “Now is the time to join a MF’in Militia, Not a Political Party,” and “We came into this world screaming covered in blood and will be leaving the same way. No retreat no surrender.”

    Tess Owen

  • Fitness guru Richard Simmons dead at 76


    Richard Simmons, the colorful fitness guru who turned aerobic dancing and positive energy into decades of fame, died Saturday, law enforcement sources said. He was 76.

    Simmons was found at his home, and there was no evidence of foul play, sources told The Times.

    Simmons specialized in helping obese people lose weight, starting with a Los Angeles fitness studio and eventually making appearances on TV shows, including a popular stint on “General Hospital.”

    In his biography, he said struggling with being overweight himself inspired him to help others.

    Over the years, he hosted a variety of shows, produced fitness videos and even had a chain of fitness studios. All the while, he made regular appearances in movies and TV shows.

    In recent years, Simmons had become the subject of fascination, some of it unwanted. He retreated from public view, and some worried about his health.

    In 2017, the “Missing Richard Simmons” podcast explored the speculation about Simmons’ welfare, although he refuted many of the rumors.

    Simmons’s representative, Tom Estey, recently told Entertainment Tonight that he was celebrating his 76th birthday by working on a new Broadway musical.

    Simmons, who was active on social media, appeared to be in good spirits in recent days. He posted a black-and-white photograph of himself next to a cake on his birthday to mark the occasion.

    “I never got so many messages about my birthday in my life!” Simmons wrote on Facebook. “I am sitting here writing emails. Have a most beautiful rest of your Friday.”

    It was a marked change of pace from earlier in the year when Simmons had posted cryptic messages ruminating over his mortality.

    “I am … dying,” Simmons wrote on Facebook. “Oh I can see your faces now. The truth is we all are dying. Every day we live we are getting closer to our death. Why am I telling you this? Because I want you to enjoy your life to the fullest every single day. Get up in the morning and look at the sky … count your blessings and enjoy. “

    Simmons had shared in March that he’d been diagnosed with skin cancer. He noted a “strange looking bump” underneath his right eye. He said a dermatologist found it to be basal cell carcinoma, one of the most common forms of skin cancer that can form due to long-term exposure to the sun’s ultraviolet light.

    Richard Winton, Tony Briscoe, Hannah Wiley

  • Meta rolls back restrictions on Trump’s Facebook and Instagram accounts


    Meta, the parent company of social media platforms such as Facebook and Instagram, has decided to remove restrictions placed on former President Donald Trump’s accounts.

    Meta updated its original statement announcing the end of Trump’s suspension on Facebook and Instagram in January of 2023 to reflect the Republican presumptive presidential nominee’s new online status. Axios first reported on the news.

    Meta removed Trump from all of its platforms following the attack on the US Capitol on Jan. 6, 2021, amid “extreme and highly unusual circumstances,” according to Meta’s original statement.

    Seven people were killed as a result of violence or collateral damage stemming from the attack on the Capitol building.

    The following May, the Oversight Board ruled that Facebook failed to apply an appropriate penalty with its indefinite suspension of Trump’s accounts for “severely” violating Facebook and Instagram’s community guidelines and standards. In a video statement released less than three hours after the violence began, Trump said, “We love you. You’re very special,” and called the insurrectionists “great patriots.” Those and other statements made in the wake of the US Capitol attack convinced the board that Trump violated its standard against praising or supporting people engaging in violence on its platforms.

    Two years later, Meta restored Trump’s accounts following a time-bound suspension with stricter penalties for violating its terms of service, a standard that was higher than any other user on Facebook and Instagram. Meta noted in its latest update that the ex-president will be subject to the same standard as everyone else.

    “With the party conventions taking place shortly, including the Republican convention next week, the candidates for President of the United States will soon be formally nominated,” according to Meta’s statement. “In assessing our responsibility to allow political expression, we believe that the American people should be able to hear from the nominees for President on the same basis.”

    Twitter, now X, also took action against President Trump in the wake of the Jan. 6 insurrection on the Capitol for three tweets he posted that were labeled for inciting violence. It started with a 12-hour suspension on Jan. 6, 2021. Two days later, Twitter banned him completely after determining that subsequent posts also violated its community standards. The following year, Twitter’s new owner Elon Musk conducted an informal poll on his account asking if he should remove President Trump’s ban and reinstated his account a few days later.

    Danny Gallagher

  • Meta changes its label from ‘Made with AI’ to ‘AI info’ to indicate use of AI in photos | TechCrunch


    After Meta started tagging photos with a “Made with AI” label in May, photographers complained that the social networking company had been applying labels to real photos where they had used some basic editing tools.

    Because of the user feedback and general confusion around what level of AI is used in a photo, the company is changing the tag to “AI Info” across all of Meta’s apps.

    Meta said that the earlier version of the tag wasn’t clear enough, since an image carrying it was not necessarily created with AI but might merely have been edited with AI-powered tools.

    “Like others across the industry, we’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context. For example, some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled ‘Made with AI’,” the company said in an updated blog post.

    Image Credits: Meta

    The company is not changing the underlying technology for detecting use of AI in photos and labeling them. Meta still uses information from technical metadata standards such as C2PA and IPTC that include information about use of AI tools.

    That means if photographers use tools like Adobe’s Generative Fill to remove objects, their photos might still be tagged with the new label. However, Meta hopes that the new label will help people understand that a tagged image is not always created entirely by AI.

    “‘AI Info’ can encompass content that was made and/or modified with AI so the hope is that this is more in line with people’s expectations, while we work with companies across the industry to improve the process,” Meta spokesperson Kate McLaughlin told TechCrunch over email.

    The new tag will still not solve the problem of completely AI-generated photos going undetected. And it won’t tell users about how much AI-powered editing has been done on an image.

    Meta and other social networks will need to set guidelines that aren’t unfair to photographers who haven’t changed their editing workflows but whose touch-up tools now include generative AI elements. On the other hand, companies like Adobe should warn photographers that using certain tools might get their images labeled on other services.

    Ivan Mehta

  • Meta’s Pay for Privacy Model Is Illegal, Says EU


    For the past eight months, Europeans uncomfortable with the way Meta tracks their data for personalized advertising have had another option: They can pay the tech giant up to €12.99 ($14) per month for their privacy instead.

    Meta introduced its “pay or consent” subscription model in November 2023, as fines, legal cases, and regulatory attention pressured the company to change the way it asks users to consent to targeted advertising. On Monday, however, the European Commission rejected its latest solution, arguing that the “pay or consent” subscription is illegal under the bloc’s new Digital Markets Act (DMA).

    “Our preliminary view is that Meta’s ‘Pay or Consent’ business model is in breach of the DMA,” Thierry Breton, Commissioner for the EU’s Internal Market, said in a statement. “The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access.”

    Meta denied its subscription model broke the rules. “Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA,” Meta spokesperson Matt Pollard told WIRED, referring to a Court of Justice of the European Union (CJEU) decision in July that said that Meta needed to offer users an alternative to ads, if necessary for an appropriate fee. “We look forward to further constructive dialogue with the European Commission to bring this investigation to a close.”

    In a press briefing on Monday morning, Commission officials said their concern was not that the company was charging for an ad-free service. “This is perfectly fine for us, as long as we have the middle option,” they said, explaining that there should be a third option that may still contain ads, just less targeted ones. There are different, less specific ways of serving ads to users, they added, such as contextual advertising. “The consumer needs to be in a position to choose an alternative version of the service which relies on non personalization of the ads.”

    Under the DMA, very large tech platforms must ask users for consent if they want to share their personal data with other parts of their businesses. In Meta’s case, the Commission said it is particularly concerned about the competitive advantage Meta receives over its rivals by being able to combine the data from platforms like Instagram and its advertising business.

    Meta has a chance to respond to the charges issued on Monday. However, if the company cannot reach an agreement with regulators before March 2025, Brussels has the power to levy fines of up to 10 percent of the company’s global turnover.

    In the past week, the EU has issued a series of reprimands to US tech giants. The Commission warned Apple that its App Store is in breach of EU rules for preventing app developers from offering promotions directly to their users. Brussels also accused Microsoft of abusing its dominance in the office-software market, following a complaint from rival Slack.

    Morgan Meaker