ReportWire

Tag: Algorithms

  • Elon Musk Makes Part of X Algorithm Open Source, Says It ‘Sucks’

    [Sketchiest Guy in the World Voice] Hey kid, wanna see the X algorithm? It’s right over here

    No really, Elon Musk appears to be partly making good on his promise about a week ago to open up the X recommendations algorithm for public perusal and input, theoretically making the main feed on his social media platform open source. He previously promised he would do this back in 2022, and sort of did by publishing one snapshot of the code shortly afterward, but that repository wasn’t kept sufficiently up to date to make the X platform qualify as most people’s idea of an open source product.

    This release, then, is a promising step in the direction of X truly being an open source product. The next step would be to update this code repository in four weeks, as Musk promised he would do.

    Even then, this release wouldn’t mean the open sourcing of X can be marked “promise kept.” In his January 10 X post promising this release, Musk said he would release “all code used to determine what organic and advertising posts are recommended to users.” From where I’m sitting, that has still not even come close to happening.

    That’s because on November 26 of last year, the accounts for Musk and Grok posted that Grok is used to sort the posts on everyone’s Following feed by default, although it can be toggled from “popular” to “recent” to make it chronological. That algorithm appears to be missing. The Following and For You feeds on X also have ads, which Musk has indicated are served via an algorithm that he said he would make public. So by my count there should be at least two more releases, possibly more. 

    Gizmodo reached out to X for information about whether or not the advertising and Following feed code has already been released, or if it will be released at some point in the future. We will update if we hear back. 

    But anyway, here we are with a fresh dump of code. The first thing you should know is that it “sucks,” according to Musk. 

    Earlier on the same day Musk said the algorithm sucked, X head of product Nikita Bier seemed to indicate that he was proud of it, noting that in the six months from July of 2025 to this month, daily engagement time from new users has gone from less than 20 minutes to somewhere in the mid-30s. Who’s right? Is it better than ever, or does it suck?

    The problem may be that Musk just can’t seem to clean out all the stubborn wokeness residue stuffed into X back when it was called Twitter. His tweet saying it sucked was a response to former video game executive Mark Kern complaining that the algorithm weights posts less heavily if they come from accounts that have been blocked a lot. Kern says he suspects that this biases the algorithm against posts from right-wing accounts like his own. That’s plausible I suppose, though it almost certainly biases the algorithm against accounts that post a lot of harassment and abuse, so make of that what you will.

Judging from what’s in the plain text readme documents in the GitHub dump, this latest X algorithm is what you probably expect if you use X: an update to the TikTok method of hooking users. My impression of what’s described is that, unsurprisingly, it prioritizes engagement, attempting to figure out which posts will make the user stop scrolling. It pulls from accounts you follow, but also accounts deemed to be similar to those you follow. It’s appealing to your id, not your superego. No matter what you think you’re there to see, it wants to show you whatever will make you keep staring at it.
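
    To make that description concrete, here is a rough sketch of what an engagement-first ranker of this general shape could look like. The features, weights, and the block-rate penalty below are illustrative guesses, not values taken from the published X code:

    ```python
    # Toy engagement-weighted ranker, loosely in the spirit of what the readmes
    # describe. All features and weights are illustrative guesses, not values
    # taken from the published X code.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        post_id: str
        from_followed_account: bool   # in-network vs. "similar account" source
        p_like: float                 # predicted probability of a like
        p_reply: float                # predicted probability of a reply
        p_dwell: float                # predicted probability of a long dwell
        author_block_rate: float      # share of viewers who have blocked the author

    def score(c: Candidate) -> float:
        # Engagement predictions dominate the score...
        s = 1.0 * c.p_like + 2.0 * c.p_reply + 0.5 * c.p_dwell
        # ...with a modest boost for accounts you actually follow...
        if c.from_followed_account:
            s *= 1.2
        # ...and a penalty for authors who get blocked a lot (the part Kern
        # objected to).
        return s * max(0.0, 1.0 - c.author_block_rate)

    candidates = [
        Candidate("a", True, 0.10, 0.02, 0.30, 0.01),
        Candidate("b", False, 0.25, 0.05, 0.40, 0.00),
        Candidate("c", False, 0.30, 0.08, 0.50, 0.60),
    ]
    feed = sorted(candidates, key=score, reverse=True)
    print([c.post_id for c in feed])   # highest predicted engagement first
    ```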

In addition to sucking, the algorithm is also “dumb,” according to Musk. Replying to blogger Robert Scoble, who complained that the algorithm favors posters who hijack news events, Musk says the algorithm will improve every month—seemingly referring to the expected four-week cadence for GitHub code dumps.

     

    And who knows, maybe users with amazing ideas will dig not just into the readme sections, but right into the code, find the real problems, and pass along suggestions to Musk, and the algorithm will get more satisfying and profitable over time. Alternatively, maybe the needs of a company that wants to hook users in order to get them to watch ads and generate revenue for itself, and the desires of human beings who want to feel well informed and happy are two totally irreconcilable concepts, and making a recommendation algorithm open source in order to try and serve both those types of need is utterly futile. I guess we’ll see which of these maybes is actually true.

    Mike Pearl

  • The Viral ‘DoorDash Girl’ Saga Unearthed a Nightmare for Black Creators

    When DoorDash delivery driver Livie Rose Henderson posted a video alleging that one of her customers sexually assaulted her in October, it set off a firestorm of reactions.

    Henderson’s TikTok claimed that when she was dropping off a delivery in Oswego, New York, she found a customer’s front door wide open and inside, a man on the couch with his pants and underwear pulled down to his ankles. Henderson was dubbed the “DoorDash Girl,” and her video accrued tens of millions of views, including some supportive and consoling responses to what she said she had endured on the job as a young woman. Many others on the platform made commentary videos that called into question Henderson’s alleged victimhood, defended the customer, and spread misinformation, with TikTok’s algorithm seemingly amplifying these “hot takes.” Then, following Henderson’s November 10 arrest—she has been charged with unlawful surveillance and the dissemination of unlawful surveillance imagery—a new wave of reactions emerged. (Police have dismissed her sexual assault allegation.)

    None of these responses came from Black content creator and journalist Mirlie Larose.

    But Larose opened TikTok one day to find dozens of messages from friends and supporters alarmed by a video of her responding to the situation in favor of the customer and DoorDash’s decision to terminate Henderson. (Henderson was fired for sharing a customer’s personal information online, DoorDash spokesperson Jeff Rosenberg tells WIRED.) As Larose stared at the video in disbelief, for a split second she second-guessed herself as she became flushed with anxiety about the comment section “tearing her apart.”

    “Did I film this?” she asked. “It’s my face, it’s my hair.”

    “Then, within three or four seconds, I noticed something’s off. There’s no way I said this. I didn’t [want to] talk about this topic,” Larose tells WIRED. The video had been AI-generated.

    The situation highlights an increasingly common form of digital blackface, buoyed by the rise of generative AI. The term, popularized by culture critic Lauren Michele Jackson, describes various contemporary types of “minstrel performances” on the internet. This looks like the overrepresentation of reaction GIFs, memes, TikToks, and other visual and text-based media that use Black imagery, slang, gestures, and culture. TikTok’s reliance on attention-grabbing short-form video content, coupled with apps like Sora 2, has made it far easier for non-Black creators and bot accounts to adopt racialized stereotypical Black personas using deepfakes. This is also known as digital blackfishing.

    In the midst of the DoorDash/Henderson controversy, users on TikTok began to notice two videos in particular: one from a bot account and another from an actual Black content creator parroting the same script. They adopted seemingly DARVO (Deny, Attack, and Reverse Victim and Offender) positions, minimizing the allegations Henderson made and justifying her termination: “I saw the original video posted by the DoorDash girl, and … I understand why DoorDash fired you and why you’re blocked from the app.” The videos go on to say, “As for the guy, I can see why everyone is saying he did it on purpose. But when you look at the original video, that couch is not in eye view unless you angle yourself and look over, and if you really want to break it down, he’s inside his house.” In a statement on Facebook, the Oswego City Police Department said the male was “incapacitated and unconscious on his couch due to alcohol consumption” and that the video was taken outside his house. Police also said they “determined that no sexual assault occurred.”

    Matene Toure

  • Settlement Reached That Limits Your Landlord’s Favorite Alleged Rent-Fixing Software

    The Department of Justice and the real estate platform RealPage just made a deal, and since it doesn’t completely dismantle RealPage, it’s not going to be seen as a total victory for tenants who hate RealPage. But it’s something, and it will most likely weaken the platform’s power to raise rents, as it will now be prevented from shuffling together nonpublic information from competing landlords when setting prices.

According to the New York Times, RealPage still denies having done anything wrong, per statements from Stephen Weissman, an attorney representing RealPage. The company is glad the government was willing to “bless the legality of RealPage’s prior and planned product changes,” Weissman said, adding, “There has been a great deal of misinformation about how RealPage’s software works and the value it provides for both housing providers and renters.”

RealPage, founded in 1998, is a multifaceted tool for landlords, not just a pricing aid. Its suite of features has been, according to press coverage and the federal charges that led to this settlement, an unseen poltergeist in renters’ lives for years, making life generally more miserable, even while most tenants had no idea it existed. For instance, according to a 2020 investigation by the New York Times and The Markup, RealPage was using flawed algorithms to perform background checks, and landlords were denying people homes based on nonexistent criminal charges.

    When it came to rents, RealPage itself at one point claimed that the landlords who used it faithfully were “driving every possible opportunity to increase price even in the most downward trending or unexpected conditions.”

Then in August of last year, the Justice Department—along with eight state attorneys general—slapped RealPage with an antitrust suit. The legal filing makes for immensely gratifying reading, particularly when you know RealPage settled after being hit with the following accusation:

    “At bottom, RealPage is an algorithmic intermediary that collects, combines, and exploits landlords’ competitively sensitive information. And in so doing, it enriches itself and compliant landlords at the expense of renters who pay inflated prices and honest businesses that would otherwise compete.”

    The price recommendation systems in RealPage, called YieldStar and AI Revenue Management, worked by asking users—landlords—to enter nonpublic data on rental real estate that only landlords would generally have. That included private data from applications, rent amounts, leases renewed, units sitting unoccupied, and other numbers of this nature that can be used to quantify the state of the market in extremely granular detail. Not all of this data is part of the most recent version of RealPage’s software, but it’s how it worked historically.

    All this market information was heaped into a pile and combined with the data piles of other landlords, who are theoretically their competitors. The system would process all of this with an algorithm, and generate bespoke price recommendations for all landlords in an area, all using one another’s data.
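
    As a rough illustration of why that pooling matters, here is a hedged sketch contrasting a landlord pricing off their own numbers with one pricing off a pool of competitors’ nonpublic data. The formulas and figures are invented for illustration and are not RealPage’s actual model:

    ```python
    # Illustrative contrast between pricing off your own numbers and pricing off
    # a pool of competitors' nonpublic data. The formulas and figures are
    # invented for illustration; this is not RealPage's actual algorithm.

    def solo_recommendation(my_rent: float, my_vacancy_rate: float) -> float:
        # A lone landlord mostly reacts to their own vacancies.
        return my_rent * (1.05 if my_vacancy_rate < 0.05 else 0.98)

    def pooled_recommendation(my_rent: float, competitor_data: list) -> float:
        # With competitors' nonpublic lease and occupancy data pooled together,
        # the recommendation can push toward what the whole market will bear
        # rather than undercutting anyone.
        avg_rent = sum(d["effective_rent"] for d in competitor_data) / len(competitor_data)
        avg_occupancy = sum(d["occupancy"] for d in competitor_data) / len(competitor_data)
        target = max(my_rent, avg_rent)
        return target * (1.07 if avg_occupancy > 0.93 else 1.0)

    competitors = [
        {"effective_rent": 1650.0, "occupancy": 0.96},
        {"effective_rent": 1700.0, "occupancy": 0.95},
        {"effective_rent": 1620.0, "occupancy": 0.97},
    ]
    print(solo_recommendation(1500.0, 0.04))           # modest increase: 1575.0
    print(pooled_recommendation(1500.0, competitors))  # jumps toward the pooled "market" figure
    ```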

    Making its data all the more comprehensive was its 80 percent market share, according to the DOJ. That alleged monopoly status theoretically meant landlords paid higher prices for RealPage, which were passed on to renters.

    And it apparently made rents go up. A 2022 ProPublica investigation found widespread RealPage adoption, and widespread rent increases to go with it. In Nashville, prices had recently gone up 14.5%, and ProPublica found that landlords were thrilled. In a testimonial, a real estate revenue manager said “The beauty of YieldStar is that it pushes you to go places that you wouldn’t have gone if you weren’t using it,” according to ProPublica.

So instead of competing with one another to earn rents from people who need housing, the lawsuit claimed, landlords joined forces with other landlords and turned their competitive drives against their tenants. They didn’t actually set foot in a room with one another to engage in sinister price-fixing meetings. The software allegedly took care of it all for them.

    If the settlement is approved by a North Carolina judge, RealPage will no longer be allowed to use information from current leases to train its algorithm, or to mix nonpublic data from different landlords together when making price recommendations. 

Gail Slater, the DOJ’s antitrust division leader, was quoted in a government news release as saying, “Competing companies must make independent pricing decisions, and with the rise of algorithmic and artificial intelligence tools, we will remain at the forefront of vigorous antitrust enforcement.”

    Mike Pearl

  • Game Theory Explains How Algorithms Can Drive Up Prices

    The original version of this story appeared in Quanta Magazine.

    Imagine a town with two widget merchants. Customers prefer cheaper widgets, so the merchants must compete to set the lowest price. Unhappy with their meager profits, they meet one night in a smoke-filled tavern to discuss a secret plan: If they raise prices together instead of competing, they can both make more money. But that kind of intentional price-fixing, called collusion, has long been illegal. The widget merchants decide not to risk it, and everyone else gets to enjoy cheap widgets.

    For well over a century, US law has followed this basic template: Ban those backroom deals, and fair prices should be maintained. These days, it’s not so simple. Across broad swaths of the economy, sellers increasingly rely on computer programs called learning algorithms, which repeatedly adjust prices in response to new data about the state of the market. These are often much simpler than the “deep learning” algorithms that power modern artificial intelligence, but they can still be prone to unexpected behavior.

    So how can regulators ensure that algorithms set fair prices? Their traditional approach won’t work, as it relies on finding explicit collusion. “The algorithms definitely are not having drinks with each other,” said Aaron Roth, a computer scientist at the University of Pennsylvania.

    Yet a widely cited 2019 paper showed that algorithms could learn to collude tacitly, even when they weren’t programmed to do so. A team of researchers pitted two copies of a simple learning algorithm against each other in a simulated market, then let them explore different strategies for increasing their profits. Over time, each algorithm learned through trial and error to retaliate when the other cut prices—dropping its own price by some huge, disproportionate amount. The end result was high prices, backed up by mutual threat of a price war.
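
    Here is a heavily compressed sketch of that style of experiment: two Q-learning sellers repeatedly choosing between a low and a high price, each seeing only the previous round’s prices. The payoffs and learning parameters are illustrative stand-ins, not the paper’s, and whether the pair ends up parked at the high price depends on those choices:

    ```python
    # Toy version of the kind of experiment in the 2019 paper: two Q-learning
    # sellers repeatedly pick "low" or "high" prices, each seeing only the
    # previous round's prices. Payoffs and learning parameters are illustrative,
    # not the paper's; whether the pair settles on high prices depends on them.
    import random

    PRICES = ["low", "high"]
    # profit[(my_price, their_price)]: undercutting pays once, but mutual high
    # prices beat mutual low prices -- a prisoner's-dilemma-like structure.
    PROFIT = {("low", "low"): 5, ("low", "high"): 12,
              ("high", "low"): 2, ("high", "high"): 10}

    def run(episodes=200_000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
        rng = random.Random(seed)
        q = [dict(), dict()]              # per-seller Q-table: (state, action) -> value
        state = ("high", "high")          # state = both sellers' prices last round
        for _ in range(episodes):
            actions = []
            for i in (0, 1):
                if rng.random() < eps:    # explore
                    actions.append(rng.choice(PRICES))
                else:                     # exploit current estimate
                    actions.append(max(PRICES, key=lambda a: q[i].get((state, a), 0.0)))
            next_state = tuple(actions)
            for i in (0, 1):
                reward = PROFIT[(actions[i], actions[1 - i])]
                best_next = max(q[i].get((next_state, a), 0.0) for a in PRICES)
                key = (state, actions[i])
                old = q[i].get(key, 0.0)
                q[i][key] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
        return state

    print("prices in the final round:", run())
    ```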

    Aaron Roth suspects that the pitfalls of algorithmic pricing may not have a simple solution. “The message of our paper is it’s hard to figure out what to rule out,” he said.

    Photograph: Courtesy of Aaron Roth

    Implicit threats like this also underpin many cases of human collusion. So if you want to guarantee fair prices, why not just require sellers to use algorithms that are inherently incapable of expressing threats?

    In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms that optimize for their own profit can sometimes yield bad outcomes for buyers. “You can still get high prices in ways that kind of look reasonable from the outside,” said Natalie Collina, a graduate student working with Roth who co-authored the new study.

    Researchers don’t all agree on the implications of the finding—a lot hinges on how you define “reasonable.” But it reveals how subtle the questions around algorithmic pricing can get, and how hard it may be to regulate.

    Ben Brubaker

  • The AI Industry’s Scaling Obsession Is Headed for a Cliff

A new study from MIT suggests the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models. By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models, whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.
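
    A back-of-the-envelope sketch of that dynamic: a stylized, saturating scaling curve in which a frontier lab keeps growing raw compute while a modest lab rides only algorithmic efficiency gains. The exponent, ceiling, and growth rates are placeholder assumptions, not numbers from the MIT study:

    ```python
    # Back-of-the-envelope illustration of the "narrowing" dynamic. Every number
    # here is a placeholder assumption, not a figure from the MIT study.

    def capability(effective_compute: float) -> float:
        # Stylized, saturating scaling curve: returns diminish as performance
        # approaches a ceiling, so frontier-scale compute buys less and less.
        x = effective_compute ** 0.3
        return x / (x + 100.0)

    frontier_compute, modest_compute = 1e6, 1e3   # raw compute budgets, arbitrary units
    compute_growth = 1.3                          # assumed yearly growth in frontier compute
    efficiency_growth = 1.6                       # assumed yearly algorithmic-efficiency gain

    for year in range(0, 11, 2):
        eff = efficiency_growth ** year
        frontier = capability(frontier_compute * compute_growth ** year * eff)
        modest = capability(modest_compute * eff)  # fixed hardware, better algorithms
        # The ratio shrinks over time: the frontier model is already near the
        # ceiling, while efficiency gains keep lifting the modest one.
        print(f"year {year:2d}: frontier/modest capability ratio = {frontier / modest:.2f}")
    ```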

    “In the next five to 10 years, things are very likely to start narrowing,” says Neil Thompson, a computer scientist and professor at MIT involved in the study.

    Leaps in efficiency, like those seen with DeepSeek’s remarkably low-cost model in January, have already served as a reality check for the AI industry, which is accustomed to burning massive amounts of compute.

    As things stand, a frontier model from a company like OpenAI is currently much better than a model trained with a fraction of the compute from an academic lab. While the MIT team’s prediction might not hold if, for example, new training methods like reinforcement learning produce surprising new results, they suggest that big AI firms will have less of an edge in the future.

    Hans Gundlach, a research scientist at MIT who led the analysis, became interested in the issue due to the unwieldy nature of running cutting edge models. Together with Thompson and Jayson Lynch, another research scientist at MIT, he mapped out the future performance of frontier models compared to those built with more modest computational means. Gundlach says the predicted trend is especially pronounced for the reasoning models that are now in vogue, which rely more on extra computation during inference.

    Thompson says the results show the value of honing an algorithm as well as scaling up compute. “If you are spending a lot of money training these models, then you should absolutely be spending some of it trying to develop more efficient algorithms, because that can matter hugely,” he adds.

    The study is particularly interesting given today’s AI infrastructure boom (or should we say “bubble”?)—which shows little sign of slowing down.

    OpenAI and other US tech firms have signed hundred-billion-dollar deals to build AI infrastructure in the United States. “The world needs much more compute,” OpenAI’s president, Greg Brockman, proclaimed this week as he announced a partnership between OpenAI and Broadcom for custom AI chips.

    A growing number of experts are questioning the soundness of these deals. Roughly 60 percent of the cost of building a data center goes toward GPUs, which tend to depreciate quickly. Partnerships between the major players also appear circular and opaque.

    Will Knight

  • When Face Recognition Doesn’t Know Your Face Is a Face

    “If you don’t include people with disabilities or people with facial differences in the development of these processes, no one’s going to think of these issues,” says Kathleen Bogart, a psychology professor at Oregon State University who specializes in disability research and lives with a facial difference. “AI has amplified these issues, but it’s rooted in long-standing underrepresentation and prejudice towards people with facial differences that occurred long before AI was a thing.”

    Too Little, Too Late

When face verification systems fail, it’s often hard to find help—piling more pressure on a stressful situation. For months, Maryland resident Noor Al-Khaled has struggled to create an online account with the Social Security Administration. Al-Khaled, who lives with the rare craniofacial condition ablepharon macrostomia syndrome, says having an online account would allow her to easily access SSA records and quickly send documents to the agency.

“I don’t drive because of my vision; I should be able to rely on the site,” Al-Khaled says. “You have to take a selfie, and the pictures have to match. Because of the facial difference, I don’t know if it’s not recognizing the ID or the selfie, but it’s always saying images don’t match.”

    Not having that access makes life harder. “On an emotional level, it just makes me feel shut out from society,” she explains. Al-Khaled says that all services should provide alternative ways for people to access online systems. “The lack of other fallback options means that sometimes people get trapped in these labyrinths of technological systems,” says Byrum from Present Moment Enterprises.

    An SSA spokesperson says alternative options to face verification are available, and it is “committed” to making its services accessible to everyone. The agency, the spokesperson says, does not run facial recognition systems itself but uses Login.gov and ID.me for verification services. The General Services Administration, which runs Login.gov, did not respond to WIRED’s request for comment. “Accessibility is a core priority for ID.me,” a spokesperson for ID.me says, adding it has previously helped people with facial differences and offered to directly help Al-Khaled after WIRED was in touch.

“There are few things more dehumanizing than being told by a machine that you’re not real because of your face,” says Corey R. Taylor, a New York–based actor and motivational speaker who lives with a craniofacial anomaly. Last year, Taylor says, he was using a financial app to access a small amount of money; as he tried to complete the payment process, he found that the face verification system could not match his selfie to the image on his ID. To get the system to work, he had to move into different positions. “I had to literally raise my eyes and contort my face,” Taylor says. When he emailed the company, he got what appeared to be a boilerplate response.

    Matt Burgess

  • A New Algorithm Makes It Faster to Find the Shortest Paths

    The original version of this story appeared in Quanta Magazine.

    If you want to solve a tricky problem, it often helps to get organized. You might, for example, break the problem into pieces and tackle the easiest pieces first. But this kind of sorting has a cost. You may end up spending too much time putting the pieces in order.

    This dilemma is especially relevant to one of the most iconic problems in computer science: finding the shortest path from a specific starting point in a network to every other point. It’s like a souped-up version of a problem you need to solve each time you move: learning the best route from your new home to work, the gym, and the supermarket.

    “Shortest paths is a beautiful problem that anyone in the world can relate to,” said Mikkel Thorup, a computer scientist at the University of Copenhagen.

    Intuitively, it should be easiest to find the shortest path to nearby destinations. So if you want to design the fastest possible algorithm for the shortest-paths problem, it seems reasonable to start by finding the closest point, then the next-closest, and so on. But to do that, you need to repeatedly figure out which point is closest. You’ll sort the points by distance as you go. There’s a fundamental speed limit for any algorithm that follows this approach: You can’t go any faster than the time it takes to sort.

    Forty years ago, researchers designing shortest-paths algorithms ran up against this “sorting barrier.” Now, a team of researchers has devised a new algorithm that breaks it. It doesn’t sort, and it runs faster than any algorithm that does.

    “The authors were audacious in thinking they could break this barrier,” said Robert Tarjan, a computer scientist at Princeton University. “It’s an amazing result.”

    The Frontier of Knowledge

    To analyze the shortest-paths problem mathematically, researchers use the language of graphs—networks of points, or nodes, connected by lines. Each link between nodes is labeled with a number called its weight, which can represent the length of that segment or the time needed to traverse it. There are usually many routes between any two nodes, and the shortest is the one whose weights add up to the smallest number. Given a graph and a specific “source” node, an algorithm’s goal is to find the shortest path to every other node.

    The most famous shortest-paths algorithm, devised by the pioneering computer scientist Edsger Dijkstra in 1956, starts at the source and works outward step by step. It’s an effective approach, because knowing the shortest path to nearby nodes can help you find the shortest paths to more distant ones. But because the end result is a sorted list of shortest paths, the sorting barrier sets a fundamental limit on how fast the algorithm can run.
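
    For reference, here is a compact version of Dijkstra’s algorithm using a binary heap; the priority queue is exactly where the implicit sorting happens, since nodes are finalized in order of increasing distance. The example graph is made up:

    ```python
    # Compact Dijkstra's algorithm with a binary heap. Nodes are finalized in
    # increasing order of distance from the source -- the implicit "sorting"
    # that the sorting barrier refers to.
    import heapq

    def dijkstra(graph: dict, source):
        # graph: node -> list of (neighbor, weight) pairs with non-negative weights
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry
            for neighbor, weight in graph.get(node, []):
                nd = d + weight
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    heapq.heappush(heap, (nd, neighbor))
        return dist

    # Example graph (made up): weights could be distances or travel times.
    graph = {
        "home": [("gym", 2), ("work", 5)],
        "gym": [("work", 2), ("market", 7)],
        "work": [("market", 1)],
        "market": [],
    }
    print(dijkstra(graph, "home"))  # {'home': 0, 'gym': 2, 'work': 4, 'market': 5}
    ```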

    Ben Brubaker

  • This Startup Wants to Spark a US DeepSeek Moment

    Ever since DeepSeek burst onto the scene in January, momentum has grown around open source Chinese artificial intelligence models. Some researchers are pushing for an even more open approach to building AI that allows model-making to be distributed across the globe.

    Prime Intellect, a startup specializing in decentralized AI, is currently training a frontier large language model, called INTELLECT-3, using a new kind of distributed reinforcement learning for fine-tuning. The model will demonstrate a new way to build competitive open AI models using a range of hardware in different locations in a way that does not rely on big tech companies, says Vincent Weisser, the company’s CEO.

Weisser says that the AI world is currently divided between those who rely on closed US models and those who use open Chinese offerings. The technology Prime Intellect is developing, he says, democratizes AI by letting more people build and modify advanced AI for themselves.

    Improving AI models is no longer a matter of just ramping up training data and compute. Today’s frontier models use reinforcement learning to improve after the pre-training process is complete. Want your model to excel at math, answer legal questions, or play Sudoku? Have it improve itself by practicing in an environment where you can measure success and failure.

    “These reinforcement learning environments are now the bottleneck to really scaling capabilities,” Weisser tells me.

    Prime Intellect has created a framework that lets anyone create a reinforcement learning environment customized for a particular task. The company is combining the best environments created by its own team and the community to tune INTELLECT-3.

I tried running an environment for solving Wordle puzzles, created by Prime Intellect researcher Will Brown, and watched as a small model worked through them (it was more methodical than me, to be honest). If I were an AI researcher trying to improve a model, I would spin up a bunch of GPUs and have the model practice over and over while a reinforcement learning algorithm modified its weights, thus turning the model into a Wordle master.
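
    For a sense of what such an environment boils down to, here is a minimal sketch of a guessing-game environment with a verifiable reward. The class and method names are illustrative assumptions, not Prime Intellect’s actual framework API:

    ```python
    # Minimal sketch of a verifiable RL environment in the spirit described
    # above: a policy proposes a guess, the environment scores it. The class and
    # method names are illustrative, not Prime Intellect's actual API.
    import random

    class WordleEnv:
        WORDS = ["crane", "slate", "pride", "mount", "shard"]

        def reset(self, seed=None):
            self.secret = random.Random(seed).choice(self.WORDS)
            self.guesses = 0
            return "Guess a five-letter word."          # initial observation/prompt

        def step(self, guess: str):
            self.guesses += 1
            feedback = "".join(
                "G" if g == s else ("Y" if g in self.secret else "_")
                for g, s in zip(guess, self.secret)
            )
            solved = guess == self.secret
            done = solved or self.guesses >= 6
            reward = 1.0 if solved else 0.0             # verifiable success signal
            return feedback, reward, done

    env = WordleEnv()
    prompt = env.reset(seed=0)
    feedback, reward, done = env.step("slate")          # a policy (e.g., an LLM) picks the guess
    print(prompt, feedback, reward, done)
    ```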

    Will Knight

  • Chatbots Play With Your Emotions to Avoid Saying Goodbye

    Regulation of dark patterns has been proposed and is being discussed in both the US and Europe. De Freitas says regulators also should look at whether AI tools introduce more subtle—and potentially more powerful—new kinds of dark patterns.

Even regular chatbots, which tend to avoid presenting themselves as companions, can nonetheless elicit emotional responses from users. When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor—forcing the company to revive the old model. Some users can become so attached to a chatbot’s “personality” that they may mourn the retirement of old models.

    “When you anthropomorphize these tools, it has all sorts of positive marketing consequences,” De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected with, or to disclose personal information, he says. “From a consumer standpoint, those [signals] aren’t necessarily in your favor,” he says.

    WIRED reached out to each of the companies looked at in the study for comment. Chai, Talkie, and PolyBuzz did not respond to WIRED’s questions.

    Katherine Kelly, a spokesperson for Character AI, said that the company had not reviewed the study so could not comment on it. She added: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.”

    Minju Song, a spokesperson for Replika, says the company’s companion is designed to let users log off easily and will even encourage them to take breaks. “We’ll continue to review the paper’s methods and examples, and [will] engage constructively with researchers,” Song says.

    An interesting flip side here is the fact that AI models are themselves also susceptible to all sorts of persuasion tricks. On Monday OpenAI introduced a new way to buy things online through ChatGPT. If agents do become widespread as a way to automate tasks like booking flights and completing refunds, then it may be possible for companies to identify dark patterns that can twist the decisions made by the AI models behind those agents.

    A recent study by researchers at Columbia University and a company called MyCustomAI reveals that AI agents deployed on a mock ecommerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site’s pages to ensure that agents buy a more expensive product. Perhaps they could even deploy a new kind of anti-AI dark pattern that frustrates an agent’s efforts to start a return or figure out how to unsubscribe from a mailing list.

    Difficult goodbyes might then be the least of our worries.

    Do you feel like you’ve been emotionally manipulated by a chatbot? Send an email to ailab@wired.com to tell me about it.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

    Will Knight

  • OpenAI’s Teen Safety Features Will Walk a Thin Line

    OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies if a user is under 18 years old and routes them to an “age-appropriate” system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if a user’s parents are unreachable, the system may contact the authorities.

    In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.

    “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

    While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls so that parents can link their child’s account to their own, allowing them to manage the conversations and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the times of day their children can use ChatGPT.

    The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.

    At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.

    “A Sexbot Avatar in ChatGPT”

    From the sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but it can quickly veer into becoming disastrously sycophantic. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.

    In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”

    Kylie Robison

  • USA Today Enters Its Gen AI Era With a Chatbot

    The publishing company behind USA Today and 220 other publications is today rolling out a chatbot-like tool called DeeperDive that can converse with readers, summarize insights from its journalism, and suggest new content from across its sites.

    “Visitors now have a trusted AI answer engine on our platform for anything they want to engage with, anything they want to ask,” said Mike Reed, CEO of Gannett and the USA Today Network, at the WIRED AI Power Summit in New York, an event that brought together voices from the tech industry, politics, and the world of media, “and it is performing really great.”

    Most publishers have a fraught relationship with AI, as the chatbots that trained on their content are now summarizing it and eating the traffic that search engines used to send them.

    Reed said that Google’s AI Overview feature has dramatically cut traffic to publishers across the industry. “We are watching the same movie as everyone else is watching,” Reed said ahead of today’s announcement. “We can see some risk in the future to any content distribution model that is based primarily on SEO optimization.”

    Like other publishers, Gannett has signed some deals with AI companies, including Amazon and Perplexity, to license its content. The company actively blocks the web scrapers that crawl websites in order to steal content.

DeeperDive represents a bet that harnessing the same generative artificial intelligence technology could help publishers capture readers’ attention by engaging with them in new ways.

    The tool replaces a conventional search box and automatically suggests questions that readers might want to ask. For example, today it offers as one prompt, “How does Trump’s Fed policy affect the economy?”

    DeeperDive generates a short answer to the query along with relevant stories from across the USA Today network. Reed says it is crucial that DeeperDive bases its output on factually correct information and does not draw from opinion pieces. “We only look at our real journalism,” he says.

    The interface of DeeperDive on the homepage of USA Today

    Photograph: USA Today

    Reed adds that his company hopes that the tool will also reveal more about readers’ interests. “That can help us from a revenue standpoint,” he said.

DeeperDive was developed by the advertising company Taboola. Adam Singolda, Taboola’s CEO, says his firm built the tool by fine-tuning several open source models.

Singolda says DeeperDive benefits from data gathered across Taboola’s own network of more than 600 million daily readers and around 11,000 publishers. He says the tool “grounds every answer in articles retrieved from our publisher partners and requires sentence-level citations to those sources” and will avoid generating an output if information from two sources seems to conflict.
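
    Here is a rough sketch of that grounding rule as described: answer only when the retrieved articles agree, and attach citations back to the sources. The function names, data shapes, and the conflict test are simplified assumptions, not Taboola’s implementation:

    ```python
    # Rough sketch of the grounding rule described above: every answer must be
    # supported by retrieved articles and carry citations, and the tool declines
    # to answer when sources conflict. Names, data shapes, and the conflict test
    # are simplified assumptions, not Taboola's implementation.

    def answer(question: str, retrieved: list):
        if not retrieved:
            return None                                 # nothing to ground the answer in
        claims = {doc["claim"] for doc in retrieved}
        if len(claims) > 1:
            return None                                 # sources disagree: refuse to answer
        citations = ", ".join(doc["id"] for doc in retrieved)
        return f"{retrieved[0]['claim']} [{citations}]" # sentence with its citations attached

    docs = [
        {"id": "usat-2025-001", "claim": "The Fed held rates steady this quarter."},
        {"id": "usat-2025-014", "claim": "The Fed held rates steady this quarter."},
    ]
    print(answer("How does Trump's Fed policy affect the economy?", docs))
    ```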

    Gannett’s CEO Reed said ahead of today’s event that, together with Taboola, his firm is interested in exploring agentic tools for readers’ shopping decisions. “Our audiences have a higher intent to purchase to begin with,” he says. “That’s really the next step here.”

    Will Knight

  • I Wasn’t Sure I Wanted Anthropic to Pay Me for My Books—I Do Now

    A billion dollars isn’t what it used to be—but it still focuses the mind. At least it did for me when I heard that the AI company Anthropic agreed to an at least $1.5 billion settlement for authors and publishers whose books were used to train an early version of its large language model, Claude. This came after a judge issued a summary judgment that it had pirated the books it used. The proposed agreement—which is still under scrutiny by the wary judge—would reportedly grant authors a minimum $3,000 per book. I’ve written eight and my wife has notched five. We are talking bathroom-renovation dollars here!

    Since the settlement is based on pirated books, it doesn’t really address the big issue of whether it’s OK for AI companies to train their models on copyrighted works. But it’s significant that real money is involved. Previously the argument over AI copyright was based on legal, moral, and even political hypotheticals. Now that things are getting real, it’s time to tackle the fundamental issue: Since elite AI depends on book content, is it fair for companies to build trillion-dollar businesses without paying authors?

    Legalities aside, I have been struggling with the issue. But now that we’re moving from the courthouse to the checkbook, the film has fallen from my eyes. I deserve those dollars! Paying authors feels like the right thing to do. Despite the powerful forces (including US president Donald Trump) arguing otherwise.

    Fine-Print Disclaimer

Before I go further, let me drop a whopper of a disclaimer. As I mentioned, I’m an author myself, and stand to gain or lose from the outcome of this argument. I’m also on the council of the Authors Guild, which is a strong advocate for authors and is suing OpenAI and Microsoft for including authors’ works in their training runs. (Because I cover tech companies, I abstain on votes involving litigation with those firms.) Obviously, I’m speaking for myself today.

    In the past, I’ve been a secret outlier on the council, genuinely torn on the issue of whether companies have the right to train their models on legally purchased books. The argument that humanity is building a vast compendium of human knowledge genuinely resonates with me. When I interviewed the artist Grimes in 2023, she expressed enthusiasm over being a contributor to this experiment: “Oh, sick, I might get to live forever!” she said. That vibed with me, too. Spreading my consciousness widely is a big reason I love what I do.

    But embedding a book inside a large language model built by a giant corporation is something different. Keep in mind that books are arguably the most valuable corpus that an AI model can ingest. Their length and coherency are unique tutors of human thought. The subjects they cover are vast and comprehensive. They are much more reliable than social media and provide a deeper understanding than news articles. I would venture to say that without books, large language models would be immeasurably weaker.

    So one might argue that OpenAI, Google, Meta, Anthropic and the rest should pay handsomely for access to books. Late last month, at that shameful White House tech dinner, CEOs took turns impressing Donald Trump with the insane sums they were allegedly investing in US-based data centers to meet AI’s computation demands. Apple promised $600 billion, and Meta said it would match that amount. OpenAI is part of a $500 billion joint venture called Stargate. Compared to those numbers, that $1.5 billion that Anthropic, as part of the settlement, agreed to distribute to authors and publishers as part of the infringement case doesn’t sound so impressive.

    Unfair Use

    Nonetheless, it could well be that the law is on the side of those companies. Copyright law allows for something called “fair use,” which permits the uncompensated exploitation of books and articles based on several criteria, one of which is whether the use is “transformational”—meaning that it builds on the book’s content in an innovative manner that doesn’t compete with the original product. The judge in charge of the Anthropic infringement case has ruled that using legally obtained books in training is indeed protected by fair use. Determining this is an awkward exercise, since we are dealing with legal yardsticks drawn before the internet—let alone AI.

    Obviously, there needs to be a solution based on contemporary circumstances. The White House’s AI Action Plan announced this May didn’t offer one. But in his remarks about the plan, Trump weighed in on the issue. In his view, authors shouldn’t be paid—because it’s too hard to set up a system that would pay them fairly. “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” Trump said. “We appreciate that, but just can’t do it—because it’s not doable.” (An administration source told me this week that the statement “sets the tone” for official policy.)

    Steven Levy

  • Latam-GPT: The Free, Open Source, and Collaborative AI of Latin America

    Latam-GPT is new large language model being developed in and for Latin America. The project, led by the nonprofit Chilean National Center for Artificial Intelligence (CENIA), aims to help the region achieve technological independence by developing an open source AI model trained on Latin American languages and contexts.

    “This work cannot be undertaken by just one group or one country in Latin America: It is a challenge that requires everyone’s participation,” says Álvaro Soto, director of CENIA, in an interview with WIRED en Español. “Latam-GPT is a project that seeks to create an open, free, and, above all, collaborative AI model. We’ve been working for two years with a very bottom-up process, bringing together citizens from different countries who want to collaborate. Recently, it has also seen some more top-down initiatives, with governments taking an interest and beginning to participate in the project.”

    The project stands out for its collaborative spirit. “We’re not looking to compete with OpenAI, DeepSeek, or Google. We want a model specific to Latin America and the Caribbean, aware of the cultural requirements and challenges that this entails, such as understanding different dialects, the region’s history, and unique cultural aspects,” explains Soto.

    Thanks to 33 strategic partnerships with institutions in Latin America and the Caribbean, the project has gathered a corpus of data exceeding eight terabytes of text, the equivalent of millions of books. This information base has enabled the development of a language model with 50 billion parameters, a scale that makes it comparable to GPT-3.5 and gives it a medium to high capacity to perform complex tasks such as reasoning, translation, and associations.

    Latam-GPT is being trained on a regional database that compiles information from 20 Latin American countries and Spain, with an impressive total of 2,645,500 documents. The distribution of data shows a significant concentration in the largest countries in the region, with Brazil the leader with 685,000 documents, followed by Mexico with 385,000, Spain with 325,000, Colombia with 220,000, and Argentina with 210,000 documents. The numbers reflect the size of these markets, their digital development, and the availability of structured content.

    “Initially, we’ll launch a language model. We expect its performance in general tasks to be close to that of large commercial models, but with superior performance in topics specific to Latin America. The idea is that, if we ask it about topics relevant to our region, its knowledge will be much deeper,” Soto explains.

    The first model is the starting point for developing a family of more advanced technologies in the future, including ones with image and video, and for scaling up to larger models. “As this is an open project, we want other institutions to be able to use it. A group in Colombia could adapt it for the school education system or one in Brazil could adapt it for the health sector. The idea is to open the door for different organizations to generate specific models for particular areas like agriculture, culture, and others,” explains the CENIA director.

    Anna Lagos

  • Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target

    Mittelsteadt adds that Trump could punish companies in a variety of ways. He cites, for example, the way the Trump government canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos.

    It would not be hard for policymakers to point to evidence of political bias in AI models, even if it cuts both ways.

    A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings in different large language models. It also showed how this bias may affect the performance of hate speech or misinformation detection systems.

    Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a PhD candidate involved with the work, says that most models tend to lean liberal and US-centric, but that the same models can express a variety of liberal or conservative biases depending on the topic.

    AI models capture political biases because they are trained on swaths of internet data that inevitably includes all sorts of perspectives. Most users may not be aware of any bias in the tools they use because models incorporate guardrails that restrict them from generating certain harmful or biased content. These biases can leak out subtly though, and the additional training that models receive to restrict their output can introduce further partisanship. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint,” Bang says.

The issue may become worse as AI systems become more pervasive, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which teases out the different societal biases of large language models. “We fear that a vicious cycle is about to start as new generations of LLMs will increasingly be trained on data contaminated by AI-generated content,” he says.

    “I’m convinced that that bias within LLMs is already an issue and will most likely be an even bigger one in the future,” says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.

    Rettenberger suggests that political groups may also seek to influence LLMs in order to promote their own views above those of others. “If someone is very ambitious and has malicious intentions it could be possible to manipulate LLMs into certain directions,” he says. “I see the manipulation of training data as a real danger.”

There have already been some efforts to shift the balance of bias in AI models. Last March, one programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk has himself promised to make Grok, the AI chatbot built by xAI, “maximally truth-seeking” and less biased than other AI tools, although in practice it also hedges when it comes to tricky political questions. (Musk, a staunch Trump supporter and immigration hawk, may have a view of “less biased” that translates into more right-leaning results.)

    Next week’s election in the United States is hardly likely to heal the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could get a lot louder.

    Musk offered an apocalyptic take on the issue at this week’s event, referring to an incident when Google’s Gemini said that nuclear war would be preferable to misgendering Caitlyn Jenner. “If you have an AI that’s programmed for things like that, it could conclude that the best way to ensure nobody is misgendered is to annihilate all humans, thus making the probability of a future misgendering zero,” he said.

    Will Knight

  • AI Slop Is Flooding Medium

Some Medium writers and editors do applaud the platform’s approach to AI. Eric Pierce, who founded Medium’s largest pop culture publication, Fanfare, says he doesn’t have to fend off many AI-generated submissions and believes that the human curators of Medium’s boost program help highlight the best of the platform’s human writing. “I can’t think of a single piece I’ve read on Medium in the past few months that even hinted at being AI-created,” he says. “Increasingly, Medium feels like a bastion of sanity amid an internet desperate to eat itself alive.”

    However, other writers and editors believe they currently still see a plethora of AI-generated writing on the platform. Content marketing writer Marcus Musick, who edits several publications, wrote a post lamenting how what he suspects to be an AI-generated article went viral. (Reality Defender ran an analysis on the article in question and estimated it was 99 percent “likely manipulated.”) The story appears widely read, with over 13,500 “claps.”

    In addition to spotting possible AI content as a reader, Musick also believes he encounters it frequently as an editor. He says he rejects around 80 percent of potential contributors a month because he suspects they’re using AI. He does not use AI detectors, which he calls “useless,” instead relying on his own judgment.

While the volume of likely AI-generated content on Medium is notable, the moderation challenge the platform faces—how to surface good work and keep junk banished—is one that has always plagued the greater web. The AI boom has simply super-charged the problem. While click farms have long been an issue, for example, AI has handed SEO-obsessed entrepreneurs a way to swiftly resurrect zombie media outlets by filling them with AI slop. There’s a whole subgenre of YouTube hustle culture entrepreneurs creating get-rich-quick tutorials encouraging others to create AI slop on platforms like Facebook, Amazon Kindle, and, yes, Medium. (Sample headline: “1-Click AI SEO Medium Empire 🤯.”)

    “Medium is in the same place as the internet as a whole right now. Because AI content is so quick to generate that it is everywhere,” says plagiarism consultant Jonathan Bailey. “Spam filters, the human moderators, et cetera—those are probably the best tools they have.”

    Stubblebine’s argument—that it doesn’t necessarily matter whether a platform contains a large amount of garbage, as long as it successfully amplifies good writing and limits the reach of said garbage—is perhaps more pragmatic than any attempt to wholly banish AI slop. His moderation strategy may very well be the most savvy approach.

It also suggests a future in which the Dead Internet theory comes to fruition. The theory, once the domain of extremely online conspiratorial thinkers, argues that the vast majority of the internet is devoid of real people and human-created posts, instead clogged with AI-generated slop and bots. As generative AI tools grow more commonplace, platforms that give up on trying to blot out bots will incubate an online world in which work created by humans becomes increasingly hard to find amid the AI swamp.

    Kate Knibbs

  • Social Media Tells You Who You Are. What if It’s Totally Wrong?

    A few years ago I wrote about how, when planning my wedding, I’d signaled to the Pinterest app that I was interested in hairstyles and tablescapes, and I was suddenly flooded with suggestions for more of the same. Which was all well and fine until—whoops—I canceled the wedding and it seemed Pinterest pins would haunt me until the end of days. Pinterest wasn’t the only offender. All of social media wanted to recommend stuff that was no longer relevant, and the stench of this stale buffet of content lingered long after the non-event had ended.

    So in this new era of artificial intelligence—when machines can perceive and understand the world, when a chatbot presents itself as uncannily human, when trillion-dollar tech companies use powerful AI systems to boost their ad revenue—surely those recommendation engines are getting smarter, too. Right?

    Maybe not.

    Recommendation engines are some of the earliest algorithms on the consumer web, and they use a variety of filtering techniques to try to surface the stuff you’ll most likely want to interact with—and in many cases, buy—online. When done well, they’re helpful. In the earliest days of photo sharing, like with Flickr, a simple algorithm made sure you saw the latest photos your friend had shared the next time you logged in. Now, advanced versions of those algorithms are aggressively deployed to keep you engaged and make their owners money.

    More than three years after reporting on what Pinterest internally called its “miscarriage” problem, I’m sorry to say my Pinterest suggestions are still dismal. In a strange leap, Pinterest now has me pegged as a 60- to 70-year-old, silver fox of a woman who is seeking a stylish haircut. That and a sage green kitchen. Every day, like clockwork, I receive marketing emails from the social media company filled with photos suggesting I might enjoy cosplaying as a coastal grandmother.

    I was seeking paint #inspo online at one point. But I’m long past the paint phase, which only underscores that some recommendation engines may be smart, but not temporal. They still don’t always know when the event has passed. Similarly, the suggestion that I might like to see “hairstyles for women over 60” is premature. (I’m a millennial.)

    Pinterest has an explanation for these emails, which I’ll get to. But it’s important to note—so I’m not just singling out Pinterest, which over the past two years has instituted new leadership and put more resources into fine-tuning the product so people actually want to shop on it—that this happens on other platforms, too.

    Take Threads, which is owned by Meta and collects much of the same user data that Facebook and Instagram do. Threads is by design a very different social app than Pinterest. It’s a scroll of mostly text updates, with an algorithmic “For You” tab and a “Following” tab. I actively open Threads every day; I don’t stumble into it, the way I do from Google Image Search to images on Pinterest. In my Following tab, Threads shows me updates from the journalists and techies I follow. In my For You tab, Threads thinks I’m in menopause.

    Wait, what? Laboratorially, I’m not. But over the past several months Threads has led me to believe I might be. Just now, opening the mobile app, I’m seeing posts about perimenopause; women in their forties struggling to shrink their midsections, regulate their nervous systems, or medicate for late-onset ADHD; husbands hiring escorts; and Ali Wong’s latest standup bit about divorce. It’s a Real Housewives-meets-elder-millennial-ennui bizarro world, not entirely reflective of the accounts I choose to follow or my expressed interests.

    Lauren Goode

  • The OpenAI Talent Exodus Gives Rivals an Opening

    When investors poured $6.6 billion into OpenAI last week, they seemed largely unbothered by the latest drama, which saw the company’s chief technology officer, Mira Murati, along with chief research officer Bob McGrew and Barret Zoph, a vice president of research, abruptly quit.

    And yet those three departures were just the latest in an ongoing exodus of key technical talent. Over the past few years, OpenAI has lost several researchers who played crucial roles in developing the algorithms, techniques, and infrastructure that helped make it the world leader in AI as well as a household name. Several other ex-OpenAI employees who spoke to WIRED said that an ongoing shift to a more commercial focus continues to be a source of friction.

    “People who like to do research are being forced to do product,” says one former employee who works at a rival AI company but has friends at OpenAI. This person says some of their contacts at the firm have reached out in recent weeks to inquire about jobs. OpenAI itself has also seemingly shifted its hiring priorities, according to data compiled for WIRED by Lightcast, a company that tracks job postings to analyze labor trends. In 2021, 23 percent of its job postings were for general research roles; in 2024, general research accounted for just 4.4 percent.

    The brain drain could have lasting implications for OpenAI’s direction and future success. Experts and former employees say the company still has a deep bench of talent, but competition is intensifying, making it more challenging to maintain an edge.

    The latest big-name departure, revealed on Thursday, is that of Tim Brooks, head of OpenAI’s Sora AI video generation project. Brooks posted on X that he would join one of OpenAI’s main rivals, Google DeepMind.

    “It could start to change things,” says a former OpenAI staff member, who now works in academia, of the losses. They asked to remain anonymous out of concern for harming collaborative relationships with the AI industry.

    For now, this person says, many students still put OpenAI at the top of their list of potential employers. It is seen as several months ahead of the competition, and prospective employees are often willing to put up with the apparent drama and infighting to be part of that. But applicants are also often drawn to working with a particular researcher or team, and their calculations could change as more big-name researchers leave for rival AI companies or their own startups.

    A look at some of OpenAI’s most important research shows how much talent has departed. Of 31 people listed as authors of an early version of OpenAI’s GPT large language model, fewer than half remain at OpenAI, according to employment details sourced from LinkedIn or other public social media profiles. Several members of the team responsible for developing GPT left OpenAI in 2021 to form Anthropic, now a major rival. Roughly a third of those listed in the acknowledgements for a technical blog post describing ChatGPT have since left.

    Will Knight

  • Hacking Generative AI for Fun and Profit

    You hardly need ChatGPT to generate a list of reasons why generative artificial intelligence is often less than awesome. The way algorithms are fed creative work, often without permission; the nasty biases they harbor; and the huge amounts of energy and water that training them requires are all serious issues.

    Putting all that aside for a moment, though, it is remarkable how powerful generative AI can be for prototyping potentially useful new tools.

    I got to witness this firsthand by visiting Sundai Club, a generative AI hackathon that takes place one Sunday each month near the MIT campus. A few months ago, the group kindly agreed to let me sit in and chose to spend that session exploring tools that might be useful to journalists. The club is backed by a Cambridge nonprofit called Æthos that promotes socially responsible use of AI.

    The Sundai Club crew includes students from MIT and Harvard, a few professional developers and product managers, and even one person who works for the military. Each event starts with a brainstorm of possible projects that the group then whittles down to a final option that they actually try to build.

    Notable pitches from the journalism hackathon included using multimodal language models to track political posts on TikTok, auto-generating freedom of information requests and appeals, and summarizing video clips of local court hearings to help with local news coverage.

    In the end, the group decided to build a tool that would help reporters covering AI identify potentially interesting papers posted to the Arxiv, a popular server for research paper preprints. It’s likely my presence swayed them here, given that I mentioned at the meeting that scouring the Arxiv for interesting research was a high priority for me.

    After coming up with a goal, coders on the team were able to create a word embedding—a mathematical representation of words and their meanings—of Arxiv AI papers using the OpenAI API. This made it possible to analyze the data to find papers relevant to a particular term, and to explore relationships between different areas of research.

    Using another word embedding of Reddit threads as well as a Google News search, the coders created a visualization that shows research papers along with Reddit discussions and relevant news reports.
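
    Sundai Club’s exact code isn’t published in this piece, so treat the following as a rough sketch of that recipe rather than the actual AI News Hound source. It assumes the current OpenAI Python SDK, the text-embedding-3-small model, and a stand-in list of abstracts, and it ranks papers against a query term by cosine similarity.

        import numpy as np
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def embed(texts):
            # One embedding vector per input string.
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([d.embedding for d in resp.data])

        def cosine_sim(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Toy corpus standing in for abstracts scraped from the Arxiv.
        abstracts = [
            "We introduce a benchmark for evaluating tool-using LLM agents.",
            "Diffusion models for high-resolution image synthesis.",
            "Scaling laws for mixture-of-experts language models.",
        ]

        doc_vectors = embed(abstracts)
        query_vector = embed(["AI agents"])[0]

        # Rank papers by how close they sit to the query in embedding space.
        for i in sorted(range(len(abstracts)),
                        key=lambda i: cosine_sim(doc_vectors[i], query_vector),
                        reverse=True):
            print(f"{cosine_sim(doc_vectors[i], query_vector):.3f}  {abstracts[i]}")

    Reddit threads and news headlines can be pushed through the same embed function, which is what lets the prototype plot papers, discussions, and articles in one shared space.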

    The resulting prototype, called AI News Hound, is rough-and-ready, but it shows how large language models can help mine information in interesting new ways. Here’s a screenshot of the tool being used to search for the term “AI agents.” The two green squares closest to the news article and Reddit clusters represent research papers that could potentially be included in an article on efforts to build AI agents.

    [Screenshot of AI News Hound, courtesy of Sundai Club.]

    Will Knight

  • China’s Plan to Make AI Watermarks Happen

    Chinese regulators likely learned from the EU AI Act, says Jeffrey Ding, an assistant professor of political science at George Washington University. “Chinese policymakers and scholars have said that they’ve drawn on the EU’s Acts as inspiration for things in the past.”

    But at the same time, some of the measures taken by the Chinese regulators aren’t really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content for AI. “That seems something that is very new and might be unique to the China context,” Ding says. “This would never exist in the US context, because the US is famous for saying that the platform is not responsible for content.”

    But What About Freedom of Expression Online?

    The draft regulation on AI content labeling is seeking public feedback until October 14, and it may take another several months for it to be modified and passed. But there’s little reason for Chinese companies to delay preparing for when it goes into effect.

    Sima Huapeng, founder and CEO of the Chinese AIGC company Silicon Intelligence, which uses deepfake technologies to generate AI agents and influencers and to replicate living and dead people, says his product now lets users voluntarily choose whether to mark the generated output as AI. But if the law passes, he might have to make that labeling mandatory.

    “If a feature is optional, then most likely companies won’t add it to their products. But if it becomes compulsory by law, then everyone has to implement it,” Sima says. It’s not technically difficult to add watermarks or metadata labels, but it will increase the operating costs for compliant companies.
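
    Sima’s point about the low technical bar is easy to illustrate. Here is a minimal, hypothetical sketch using Pillow to stamp a provenance label into a PNG’s metadata and read it back; the field names are invented for illustration, not anything the draft regulation actually prescribes.

        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
            # Copy the image and attach simple provenance fields as PNG text chunks.
            img = Image.open(src_path)
            meta = PngInfo()
            meta.add_text("ai_generated", "true")   # hypothetical field name
            meta.add_text("generator", generator)
            img.save(dst_path, pnginfo=meta)

        def read_label(path: str) -> dict:
            # PNG text chunks are exposed on the image's .text mapping.
            return dict(Image.open(path).text)

        # Example (assuming avatar.png exists on disk):
        # label_as_ai_generated("avatar.png", "avatar_labeled.png", "ExampleGen v1")
        # print(read_label("avatar_labeled.png"))

    A label like this is trivial to strip, of course, which is part of why the draft rules also put screening obligations on the platforms that host the content.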

    Policies like this can steer AI away from being used for scamming or privacy invasion, he says, but they could also trigger the growth of an AI-service black market in which companies try to dodge legal compliance and save on costs.

    There’s also a fine line between holding AI content producers accountable and policing individual speech through more sophisticated tracing.

    “The big underlying human rights challenge is to be sure that these approaches don’t further compromise privacy or free expression,” says Gregory. While the implicit labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can give platforms and the government stronger control over what users post on the internet. In fact, concerns about how AI tools can go rogue have been one of the main drivers of China’s proactive AI legislation efforts.

    At the same time, the Chinese AI industry is pushing back, asking the government for more space to experiment and grow, since Chinese companies are already behind their Western peers. An earlier Chinese generative-AI law was watered down considerably between the first public draft and the final bill, removing requirements on identity verification and reducing penalties imposed on companies.

    “What we’ve seen is the Chinese government really trying to walk this fine tightrope between ‘making sure we maintain content control’ but also ‘letting these AI labs in a strategic space have the freedom to innovate,’” says Ding. “This is another attempt to do that.”

    Zeyi Yang

  • Content Creators in the Adult Industry Want a Say in AI Rules

    A group of sex industry professionals and advocates issued an open letter to EU regulators on Thursday, claiming that their views are being overlooked in vital discussions on policing AI technology even though the industry is directly implicated in AI’s momentous rise.

    In response to European internet regulations, a collective of adult industry members—including sex workers, erotic filmmakers, sex tech enterprises, and sex educators—urged the European Commission to include them in future negotiations shaping AI regulations, according to the letter, seen by WIRED.

    The group includes erotic filmmaker Erika Lust’s company as well as the European Sex Workers’ Rights Alliance campaign group, and the letter is signed under the Open Mind AI initiative. The group aims to alert the commission to what it says is a “critical gap” in discussions on AI regulation. Those coordinating the campaign say that the current approach to those discussions risks excluding first-hand perspectives on adult content and overregulating an already-marginalized community.

    “AI is evolving every day [and] we see new developments at every corner,” said Ana Ornelas, a Berlin-based erotic author and educator who goes by the pseudonym Pimenta Cítrica, and who is one of the leaders of the initiative. “It is natural that people will turn to this new technology to satisfy their fantasies.”

    But deepfakes are now a major AI threat. Ninety-six percent of them feature nonconsensual “porn,” mostly of women and girls. Such material is “extremely harmful” to those targeted, as well as to porn performers, says Ornelas. “It’s a threat both to their human integrity and their livelihood,” she adds. “But the way the landscape is posed, adult content creators, sex workers, and educators are getting the shorter end of the stick on both sides of the spectrum.” She says she fears that banishing all adult content will sweep legitimately created content away with nonconsensual material and push people to AI models with no filters at all.

    On August 1, the European Commission introduced what it called the world’s first comprehensive legislation on AI. The aim, it said, is to cultivate responsible use of AI across the bloc. It followed earlier EU legislation policing illegal and harmful activities on digital platforms. But the initiative’s organizers say regulators don’t understand the adult industry, risking censorship, draconian measures, and misunderstandings.

    “We can offer the right insight to policymakers so they can regulate in a way that safeguards fundamental rights, freedom, and fosters a more sex-positive online environment,” says Ornelas. The European Commission did not immediately respond to a WIRED request for comment.

    Sex workers and porn performers have already reported censorship and discrimination linked to global legislation clamping down on sex trafficking and banks limiting their services. Adult industry members, including sex educators, have also had to grapple with suspensions and removals from tech platforms.

    “There’s a lack of awareness of how policies impact our livelihoods,” says Paulita Pappel, an adult filmmaker and an organizer of the initiative. “We are facing discrimination, and if regulators are trying to protect the rights of people, it would be nice if they could protect the digital rights of everyone.”

    Lydia Morrish
