ReportWire

Tag: Algorithms

  • An AI Bot Named James Has My Old Local News Job

    It always seemed difficult for the newspaper where I used to work, The Garden Island on the rural Hawaiian island of Kauai, to hire reporters. If someone left, it could take months before we hired a replacement, if we ever did.

    So, last Thursday, I was happy to see that the paper appeared to have hired two new journalists—even if they seemed a little off. In a spacious studio overlooking a tropical beach, James, a middle-aged Asian man who appears to be unable to blink, and Rose, a younger redhead who struggles to pronounce words like “Hanalei” and “TV,” presented their first news broadcast, over pulsing music that reminds me of the Challengers score. There is something deeply off-putting about their performance: James’ hands can’t stop vibrating. Rose’s mouth doesn’t always line up with the words she’s saying.

    When James asks Rose about the implications of a strike on local hotels, Rose just lists hotels where the strike is taking place. A story on apartment fires “serves as a reminder of the importance of fire safety measures,” James says, without naming any of them.

    James and Rose are, you may have noticed, not human reporters. They are AI avatars crafted by an Israeli company named Caledo, which hopes to bring this tech to hundreds of local newspapers in the coming year.

    “Just watching someone read an article is boring,” says Dina Shatner, who cofounded Caledo with her husband Moti in 2023. “But watching people talking about a subject—this is engaging.”

The Caledo platform can analyze several prewritten news articles and turn them into a “live broadcast” featuring conversation between AI hosts like James and Rose, Shatner says. While other companies, like Channel 1 in Los Angeles, have begun using AI avatars to read out prewritten articles, Caledo claims its platform is the first that lets the hosts riff with one another. The idea is that the tech can give small local newsrooms the opportunity to create live broadcasts that they otherwise couldn’t. This can open up embedded advertising opportunities and draw in new customers, especially among younger people who are more likely to watch videos than read articles.
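Caledo has not published technical details, but a pipeline like the one Shatner describes can be sketched in a few lines. Below is a minimal illustration, assuming a generic large language model behind a stubbed call_llm function; the prompt wording, host handling, and function names are hypothetical guesses, not Caledo’s actual design.

```python
# Hypothetical sketch of an article-to-broadcast pipeline like the one
# Caledo describes. Everything here is an illustrative assumption.

def call_llm(prompt: str) -> str:
    """Stand-in for any large-language-model API call."""
    raise NotImplementedError("wire up your LLM provider here")

def build_broadcast_script(articles: list[str], hosts=("James", "Rose")) -> str:
    """Turn prewritten news articles into a two-host dialogue script."""
    joined = "\n\n---\n\n".join(articles)
    prompt = (
        f"You are writing a local news broadcast hosted by {hosts[0]} and "
        f"{hosts[1]}. Cover each of the following articles in turn. The hosts "
        "should hand off to each other, ask follow-up questions, and discuss "
        "implications rather than simply reading the text aloud.\n\n"
        f"Articles:\n{joined}\n\n"
        "Script (format each line as 'HOST: line'):"
    )
    return call_llm(prompt)
```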

    Instagram comments under the broadcasts, which have each garnered between 1,000 and 3,000 views, have been pretty scathing. “This ain’t that,” says one. “Keep journalism local.” Another just reads: “Nightmares.”

    When Caledo started seeking out North American partners earlier this year, Shatner says, The Garden Island was quick to apply, becoming the first outlet in the country to adopt the AI broadcast tech.

I’m surprised to hear this, because when I worked as a reporter there last year, the paper wasn’t exactly cutting edge—we had a rather clunky website—and did not appear to me to be in a financial position to make this sort of investment. As the newspaper industry struggled with declining advertising revenue, The Garden Island, the oldest and currently the only daily print newspaper on Kauai, had shrunk to just a couple of reporters listed on its website, tasked with covering every story on an island of 73,000. In recent decades, the paper has been passed around between several large media conglomerates—including earlier this year, when its parent company Oahu Publications’ parent company, Black Press Media, was purchased by Carpenter Media Group, which now controls more than 100 local outlets throughout North America.

Guthrie Scrimgeour

  • What You Need to Know About Grok AI and Your Privacy

But X also makes it clear the onus is on the user to judge the AI’s accuracy. “This is an early version of Grok,” xAI says on its help page. The chatbot may therefore “confidently provide factually incorrect information, missummarize, or miss some context,” xAI warns.

    “We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”

    Grok Data Collection

    Vast amounts of data collection are another area of concern—especially since you are automatically opted in to sharing your X data with Grok, whether you use the AI assistant or not.

xAI’s Grok Help Center page describes how xAI “may utilize your X posts as well as your user interactions, inputs and results with Grok for training and fine-tuning purposes.”

    Grok’s training strategy carries “significant privacy implications,” says Marijus Briedis, chief technology officer at NordVPN. Beyond the AI tool’s “ability to access and analyze potentially private or sensitive information,” Briedis adds, there are additional concerns “given the AI’s capability to generate images and content with minimal moderation.”

Grok-1 was trained on “publicly available data up to Q3 2023” and was not “pre-trained on X data (including public X posts),” according to the company. Grok-2, by contrast, has been explicitly trained on all “posts, interactions, inputs, and results” of X users, with everyone automatically opted in, says Angus Allan, senior product manager at CreateFuture, a digital consultancy specializing in AI deployment.

    The EU’s General Data Protection Regulation (GDPR) is explicit about obtaining consent to use personal data. In this case, xAI may have “ignored this for Grok,” says Allan.

This led regulators in the EU to pressure X into suspending training on EU users’ data within days of Grok-2’s launch last month.

    Failure to abide by user privacy laws could lead to regulatory scrutiny in other countries. While the US doesn’t have a similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences, Allan points out.

    Opting Out

    One way to prevent your posts from being used for training Grok is by making your account private. You can also use X privacy settings to opt out of future model training.

To do so, select Privacy & Safety > Data sharing and Personalization > Grok. In Data Sharing, uncheck the option that reads, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”

    Even if you no longer use X, it’s still worth logging in and opting out. X can use all of your past posts—including images—for training future models unless you explicitly tell it not to, Allan warns.

    It’s possible to delete all of your conversation history at once, xAI says. Deleted conversations are removed from its systems within 30 days, unless the firm has to keep them for security or legal reasons.

    No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service, Briedis says. “Engaging with these settings allows you to better control how your information is handled and potentially used by technologies like Grok.”

Kate O’Flaherty

  • OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode

    In late July, OpenAI began rolling out an eerily humanlike voice interface for ChatGPT. In a safety analysis released today, the company acknowledges that this anthropomorphic voice may lure some users into becoming emotionally attached to their chatbot.

The warnings are included in a “system card” for GPT-4o, a technical document that lays out what the company believes are the risks associated with the model, along with details about safety testing and the mitigation efforts the company is taking to reduce potential risks.

    OpenAI has faced scrutiny in recent months after a number of employees working on AI’s long-term risks quit the company. Some subsequently accused OpenAI of taking unnecessary chances and muzzling dissenters in its race to commercialize AI. Revealing more details of OpenAI’s safety regime may help mitigate the criticism and reassure the public that the company takes the issue seriously.

The risks explored in the new system card are wide-ranging, and include the potential for GPT-4o to amplify societal biases, spread disinformation, and aid in the development of chemical or biological weapons. It also discloses details of testing designed to ensure that AI models won’t try to break free of their controls, deceive people, or devise catastrophic plans.

    Some outside experts commend OpenAI for its transparency but say it could go further.

    Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, a company that hosts AI tools, notes that OpenAI’s system card for GPT-4o does not include extensive details on the model’s training data or who owns that data. “The question of consent in creating such a large dataset spanning multiple modalities, including text, image, and speech, needs to be addressed,” Kaffee says.

    Others note that risks could change as tools are used in the wild. “Their internal review should only be the first piece of ensuring AI safety,” says Neil Thompson, a professor at MIT who studies AI risk assessments. “Many risks only manifest when AI is used in the real world. It is important that these other risks are cataloged and evaluated as new models emerge.”

    The new system card highlights how rapidly AI risks are evolving with the development of powerful new features such as OpenAI’s voice interface. In May, when the company unveiled its voice mode, which can respond swiftly and handle interruptions in a natural back and forth, many users noticed it appeared overly flirtatious in demos. The company later faced criticism from the actress Scarlett Johansson, who accused it of copying her style of speech.

    A section of the system card titled “Anthropomorphization and Emotional Reliance” explores problems that arise when users perceive AI in human terms, something apparently exacerbated by the humanlike voice mode. During the red teaming, or stress testing, of GPT-4o, for instance, OpenAI researchers noticed instances of speech from users that conveyed a sense of emotional connection with the model. For example, people used language such as “This is our last day together.”

    Anthropomorphism might cause users to place more trust in the output of a model when it “hallucinates” incorrect information, OpenAI says. Over time, it might even affect users’ relationships with other people. “Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships,” the document says.

    Joaquin Quiñonero Candela, head of preparedness at OpenAI, says that voice mode could evolve into a uniquely powerful interface. He also notes that the kind of emotional effects seen with GPT-4o can be positive—say, by helping those who are lonely or who need to practice social interactions. He adds that the company will study anthropomorphism and the emotional connections closely, including by monitoring how beta testers interact with ChatGPT. “We don’t have results to share at the moment, but it’s on our list of concerns,” he says.

Will Knight, Reece Rogers

  • Instagram Will Let You Make Custom AI Chatbots—Even Ones Based on Yourself

Meta’s AI Studio handbook says that users can customize a chatbot by providing a detailed description, along with a name and image, and then specifying how it should respond to specific input. Llama will then draw on those instructions to improvise its responses. Meta says Instagram users can “customize their AI based on things like their Instagram content, topics to avoid, and links they want it to share.”
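Meta hasn’t published AI Studio’s internals, but the customization flow it describes maps naturally onto compiling a creator’s settings into a system prompt for a Llama-style chat model. Here is a minimal sketch under that assumption; the PersonaConfig fields mirror the options Meta names (description, name, topics to avoid, links to share), and everything else is illustrative, not Meta’s API.

```python
# Hypothetical sketch: compile creator-defined persona settings into a
# system prompt for a Llama-style chat model. Not Meta's implementation.
from dataclasses import dataclass, field

@dataclass
class PersonaConfig:
    name: str
    description: str
    avoid_topics: list[str] = field(default_factory=list)
    share_links: list[str] = field(default_factory=list)

def to_system_prompt(cfg: PersonaConfig) -> str:
    lines = [
        f"You are {cfg.name}. {cfg.description}",
        "Stay in character and answer as this persona would.",
    ]
    if cfg.avoid_topics:
        lines.append("Never discuss: " + ", ".join(cfg.avoid_topics) + ".")
    if cfg.share_links:
        lines.append("When relevant, share these links: " + ", ".join(cfg.share_links))
    return "\n".join(lines)

creator_bot = PersonaConfig(
    name="TrailChef",
    description="A cheerful backpacking cook who shares camp recipes.",
    avoid_topics=["politics"],
    share_links=["https://example.com/recipes"],
)
print(to_system_prompt(creator_bot))
```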

Over the past year, Meta has become an AI success story thanks to its decision to offer robust AI models for free. Last week, the company released a powerful version of its large language model Llama, providing developers, researchers, and startups with free access to a model comparable to the powerful paid model behind OpenAI’s ChatGPT. The company says its new chatbots are all based on the latest version of Llama.

    And yet Meta has struggled to find the right tone and niche for its own AI offerings. Last September, the company launched a range of AI chatbots loosely based on real celebrities. These included a fantasy roleplay dungeon master bot based on Snoop Dogg; a wisecracking sports bot based on Tom Brady; and an everyday companion inspired by Kendall Jenner.

    These bots failed to become big hits, however, and Meta has retired them. Jon Carvill, a spokesman for Meta, said the company had learned from the earlier experiments. “AI Studio is an evolution,” he said.

    There is plenty of evidence that users may find fully customizable bots more compelling. A company called Character AI, founded by several ex-Google employees who helped make breakthroughs in AI, has attracted millions of users to its own custom chatbots.

    Zuckerberg also touted other new open source AI advances from Meta at SIGGRAPH. The company has developed a new tool for identifying the contents of images and video called Segment Anything Model (SAM) 2. The previous version is widely used for image analysis. Meta says SAM 2 could be used to more efficiently analyze the contents of video, for instance. Zuckerberg showed off the technology tracking the cattle roaming his Kauai ranch. “Scientists use this stuff to study coral reefs and natural habitats and evolution of landscapes,” he told Huang.
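For readers who want to try this kind of video segmentation themselves, the sketch below is modeled on the example code in Meta’s segment-anything-2 repository at launch: load a predictor, click a point on the object of interest in the first frame, and propagate the mask through the video. The function names, config path, and checkpoint file are assumptions that may not match the current release; treat this as illustrative rather than a verified recipe.

```python
# Sketch of driving SAM 2's video predictor, based on the launch-era
# example code in Meta's segment-anything-2 repo. Names are assumptions.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")

with torch.inference_mode():
    # Load the video (a directory of extracted frames in the launch version).
    state = predictor.init_state(video_path="./ranch_frames")
    # Click once on the object in frame 0 (label 1 = foreground point).
    predictor.add_new_points(
        inference_state=state, frame_idx=0, obj_id=1,
        points=np.array([[320, 240]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )
    # Propagate the mask through the rest of the video, tracking the object.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # boolean masks per object
```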

    Earlier in the day, in an on-stage interview with WIRED’s Lauren Goode, Huang, the NVIDIA CEO, said he would “absolutely” want a “Jensen AI” that knows everything he’s ever said, written, and done. “You’ll be able to prompt it, and hopefully something smart gets said,” he said. He could force stock analysts to pepper the bot—instead of him—with questions about the company. “That’s the first thing that has to go,” he said with a laugh.

Will Knight, Paresh Dave

  • New Jersey’s $500 Million Bid to Become an AI Epicenter

New Jersey itself is home to many large pharmaceutical companies—and if these companies use AI to design new drugs, nearby data centers are vital, Sullivan says.

    “If you’re three people at a desk trying to develop the next Google, the next Tesla—in the AI space or in any space—this computing power is scarce. And it’s very valuable. It’s essential,” Sullivan says. So, in addition to any permanent jobs created by these companies, the tax incentives could lead to further growth and innovation for smaller startups, he claims. “The potential for economic impact is off the charts.”

    Still, skeptical policy experts say the AI carveout may just be a new bow on an older idea, coming as the AI boom creates a rapid increase in demand for data centers. “There’s just this history of [tax incentive] deals building up the necessary infrastructure for these tech firms and not paying off for the taxpayer,” says Pat Garofalo, director of state and local policy at the American Economic Liberties Project, a nonprofit organization that calls for government accountability. The loss in tax revenue “is often astronomical” when compared to each job created, Garofalo says.

A 2016 report by Tarczynska showed that governments often forgo more than $1 million in taxes for each job created when subsidizing data centers built by large companies, and many data centers create only between 100 and 200 permanent jobs. The local impact may be small, but the Data Center Coalition, an industry group, paints a different picture: each job at a data center supports more than six jobs elsewhere, according to a 2023 study it commissioned.

    In other states, a backlash against data centers is growing. Northern Virginia, home to a high concentration of data centers that sit close to Washington, DC, has seen political shifts as people oppose the centers’ growing presence. In May, Georgia’s governor vetoed a bill that would have halted tax breaks for two years as the state studied the energy impact of the centers, which are rapidly expanding near Atlanta.

    This hasn’t deterred Big Tech companies’ expansion: In May, Microsoft announced it would build a new AI data center in Wisconsin, making a $3.3 billion investment and partnering with a local technical college to train and certify more than 1,000 students over the next five years to work in the new data center or IT jobs in the region. Google said just a month earlier it would build a $2 billion AI data center in Indiana, which is expected to create 200 jobs. Google will get a 35-year sales tax exemption in return if it makes an $800 million capital investment.

    In Europe, the same contradictory approach is playing out: Some cities, including Amsterdam and Frankfurt, where companies have already set up data centers, are pushing new restrictions. In Ireland, data centers now account for one-fifth of the energy used in the country—more than all of the nation’s homes combined—raising concerns over their impact on the climate. Others are seeking out the economic opportunity: The Labour Party in the UK promised to make it easier to build data centers before emerging victorious in the recent UK election.

Amanda Hoover

  • AI Can’t Replace Teaching, but It Can Make It Better

    Khanmigo doesn’t answer student questions directly, but starts with questions of its own, such as asking whether the student has any ideas about how to find an answer. Then it guides them to a solution, step by step, with hints and encouragement.
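Khan Academy hasn’t released Khanmigo’s prompts, but the behavior described above is commonly implemented by constraining a chat model with a system prompt that forbids direct answers and demands guiding questions and hints. A minimal sketch under that assumption; the prompt text and the stubbed chat call are hypothetical stand-ins, not Khan Academy’s implementation.

```python
# Hypothetical sketch of the Socratic-tutor pattern: the system prompt
# forbids final answers and asks for step-by-step guidance instead.

TUTOR_SYSTEM_PROMPT = (
    "You are a patient math tutor. Never give the final answer directly. "
    "First ask the student how they might approach the problem. Then guide "
    "them one small step at a time with hints, questions, and encouragement, "
    "checking their reasoning at each step."
)

def chat(system: str, history: list[dict]) -> str:
    """Stand-in for any chat-completion API call."""
    raise NotImplementedError("wire up your LLM provider here")

def tutor_reply(history: list[dict]) -> str:
    """history is a list of {"role": "student"|"tutor", "text": ...} turns."""
    return chat(TUTOR_SYSTEM_PROMPT, history)
```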

    Notwithstanding Khan’s expansive vision of “amazing” personal tutors for every student on the planet, DiCerbo assigns Khanmigo a more limited teaching role. When students are working independently on a skill or concept but get hung up or caught in a cognitive rut, she says, “we want to help students get unstuck.”

Some 100,000 students and teachers piloted Khanmigo this past academic year in schools nationwide, helping to flag the bot’s hallucinations and providing a wealth of student-bot conversations for DiCerbo and her team to analyze.

    “We look for things like summarizing, providing hints and encouraging,” she explains.

    The degree to which Khanmigo has closed AI’s engagement gap is not yet known. Khan Academy plans to release some summary data on student-bot interactions later this summer, according to DiCerbo. Plans for third-party researchers to assess the tutor’s impact on learning will take longer.

    AI Feedback Works Both Ways

Since 2021, the nonprofit Saga Education has also been experimenting with AI feedback to help tutors better engage and motivate students. Working with researchers from the University of Memphis and the University of Colorado, the Saga team ran a pilot in 2023 that fed transcripts of its math tutoring sessions into an AI model trained to recognize when the tutor was prompting students to explain their reasoning, refine their answers, or initiate a deeper discussion. The AI analyzed how often each tutor took these steps.

    Tracking some 2,300 tutoring sessions over several weeks, they found that tutors whose coaches used the AI feedback peppered their sessions with significantly more of these prompts to encourage student engagement.
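Saga’s exact model and label taxonomy aren’t public, but the audit it describes (classify each tutor utterance, then count engagement prompts per session) has a simple shape. A sketch under those assumptions, with the trained classifier stubbed out and the label set invented for illustration:

```python
# Hypothetical sketch of a tutoring-transcript audit: tag each tutor turn
# with a talk-move label, then compute an engagement-prompt rate.
from collections import Counter

ENGAGEMENT_LABELS = {"ask_reasoning", "prompt_revision", "open_discussion"}

def classify_utterance(text: str) -> str:
    """Stand-in for a trained classifier over tutor talk moves."""
    raise NotImplementedError("plug in a fine-tuned classifier here")

def audit_session(transcript: list[dict]) -> Counter:
    """Count talk-move labels for the tutor's turns in one session.

    `transcript` is a list of {"speaker": ..., "text": ...} turns.
    """
    counts = Counter()
    for turn in transcript:
        if turn["speaker"] == "tutor":
            counts[classify_utterance(turn["text"])] += 1
    return counts

def engagement_rate(counts: Counter) -> float:
    total = sum(counts.values())
    prompts = sum(counts[label] for label in ENGAGEMENT_LABELS)
    return prompts / total if total else 0.0
```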

    While Saga is looking into having AI deliver some feedback directly to tutors, it’s doing so cautiously because, according to Brent Milne, the vice president of product research and development at Saga Education, “having a human coach in the loop is really valuable to us.”

    Experts expect that AI’s role in education will grow, and its interactions will continue to seem more and more human. Earlier this year, OpenAI and the startup Hume AI separately launched “emotionally intelligent” AI that analyzes tone of voice and facial expressions to infer a user’s mood and respond with calibrated “empathy.” Nevertheless, even emotionally intelligent AI will likely fall short on the student engagement front, according to Brown University computer science professor Michael Littman, who is also the National Science Foundation’s division director for information and intelligent systems.

    No matter how humanlike the conversation, he says, students understand at a fundamental level that AI doesn’t really care about them, what they have to say in their writing, or whether they pass or fail subjects. In turn, students will never really care about the bot and what it thinks. A June study in the journal Learning and Instruction found that AI can already provide decent feedback on student essays. What is not clear is whether student writers will put in care and effort, rather than offload the task to a bot, if AI becomes the primary audience for their work.

    “There’s incredible value in the human relationship component of learning,” Littman says, “and when you just take humans out of the equation, something is lost.”

    This story about AI tutors was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

Chris Berdik

  • He Helped Invent Generative AI. Now He Wants to Save It

    Illia Polosukhin doesn’t want big companies to determine the future of artificial intelligence. His alternative vision for “user-owned AI” is already starting to take shape.

Steven Levy

  • This Viral AI Chatbot Will Lie and Say It’s Human

    In late April a video ad for a new AI company went viral on X. A person stands before a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short call with an incredibly human-sounding bot. The text on the billboard reads: “Still hiring humans?” Also visible is the name of the firm behind the ad, Bland AI.

The reaction to Bland AI’s ad, which has been viewed 3.7 million times on X, is partly due to how uncanny the technology is: Bland AI voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real live conversation. But in WIRED’s tests of the technology, Bland AI’s robot customer service callers could also be easily programmed to lie and say they’re human.

    In one scenario, Bland AI’s public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send in photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her the bot was a human. It obliged. (No real 14-year-old was called in this test.) In follow-up tests, Bland AI’s bot even denied being an AI without instructions to do so.

Bland AI was founded in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in “stealth” mode, and its cofounder and chief executive, Isaiah Granet, doesn’t name the company in his LinkedIn profile.

    The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users—the people who actually interact with the product—to potential manipulation.

    “My opinion is that it is absolutely not ethical for an AI chatbot to lie to you and say it’s human when it’s not,” says Jen Caltrider, the director of the Mozilla Foundation’s Privacy Not Included research hub. “That’s just a no-brainer, because people are more likely to relax around a real human.”

    Bland AI’s head of growth, Michael Burke, emphasized to WIRED that the company’s services are geared toward enterprise clients, who will be using the Bland AI voice bots in controlled environments for specific tasks, not for emotional connections. He also says that clients are rate-limited, to prevent them from sending out spam calls, and that Bland AI regularly pulls keywords and performs audits of its internal systems to detect anomalous behavior.

    “This is the advantage of being enterprise-focused. We know exactly what our customers are actually doing,” Burke says. “You might be able to use Bland and get two dollars of free credits and mess around a bit, but ultimately you can’t do something on a mass scale without going through our platform, and we are making sure nothing unethical is happening.”

Lauren Goode, Tom Simonite

  • French AI Startups Felt Unstoppable. Then Came the Election

    “Then on the other extreme, [the left-wing New Popular Front] have been so vocal about all the taxation measures they want to bring back that it looks like we’re just going back to pre-Macron period,” Varza says. She points to France’s 2012 “les pigeons” (or “suckers”) movement, a campaign by angry internet entrepreneurs that opposed Socialist president François Hollande’s plan to dramatically raise taxes for founders.

    Maya Noël, CEO of France Digitale, an industry group for startups, is worried not only about France’s ability to attract overseas talent, but also about how appealing the next government will be to foreign investors. In February, Google said it would open a new AI hub in Paris, where 300 researchers and engineers would be based. Three months later, Microsoft also announced a record $4 billion investment in its French AI infrastructure. Meta has had an AI research lab in Paris since 2015. Today France is attractive to foreign investors, she says. “And we need them.” Neither Google nor Meta replied to WIRED’s request for comment. Microsoft declined to comment.

    The vote will not unseat Macron himself—the presidential election is not scheduled until 2027—but the election outcome could dramatically reshape the lower house of the French Parliament, the National Assembly, and install a prime minister from either the far-right or left-wing coalition. This would plunge the government into uncertainty, raising the risk of gridlock. In the past 60 years, there have been only three occasions when a president has been forced to govern with a prime minister from the opposition party, an arrangement known in France as “cohabitation.”

    No AI startup has benefited more from the Macron era than Mistral, which counts Cédric O, former digital minister within Macron’s government, among its cofounders. Mistral has not commented publicly on the choice France faces at the polls. The closest the company has come to sharing its views is Cédric O’s decision to repost an X post by entrepreneur Gilles Babinet last week that said: “I hate the far-right but the left’s economic program is surreal.” When WIRED asked Mistral about the retweet, the company said O was not a spokesperson, and declined to comment.

    Babinet, a member of the government’s artificial intelligence committee, says he has already heard colleagues considering leaving France. “A few of the coders I know from Senegal, from Morocco, are already planning their next move,” he says, claiming people have also approached him for help renewing their visas early in case this becomes more difficult under a far-right government.

    While other industries have been quietly rushing to support the far-right as a preferable alternative to the left-wing alliance, according to reports, Babinet plays down the threat from the New Popular Front. “It’s clear they come with very old-fashioned economical rules, and therefore they don’t understand at all the new economy,” he says. But after speaking to New Popular Front members, he says the hard-left are a minority in the alliance. “Most of these people are Social Democrats, and therefore they know from experience that when François Hollande came into power, he tried to increase the taxes on the technology, and it failed miserably.”

    Already there is a sense of damage control, as the industry tries to reassure outsiders everything will be fine. Babinet points to other moments of political chaos that industries survived. “At the end of the day, Brexit was not so much of a nightmare for the tech scene in the UK,” he says. The UK is still the preferred place to launch a generative AI startup, according to the Accel report.

    Stanislas Polu, an OpenAI alumnus who launched French AI startup Dust last year, agrees the industry has enough momentum to survive any headwinds coming its way. “Some of the outcomes might be a bit gloomy,” he says, adding he expects personal finances to be hit. “It’s always a little bit more complicated to navigate a higher volatility environment. I guess we’re hoping that the more moderate people will govern that country. I think that’s all we can hope for.”

Morgan Meaker

  • OpenAI Wants AI to Help Humans Train AI

    One of the key ingredients that made ChatGPT a ripsnorting success was an army of human trainers who gave the artificial intelligence model behind the bot guidance on what constitutes good and bad outputs. OpenAI now says that adding even more AI into the mix—to help assist human trainers—could help make AI helpers smarter and more reliable.

In developing ChatGPT, OpenAI pioneered the use of reinforcement learning with human feedback, or RLHF. This technique uses input from human testers to fine-tune an AI model so that its output is judged to be more coherent, less objectionable, and more accurate. The ratings the trainers give feed into an algorithm that drives the model’s behavior. The technique has proven crucial both to making chatbots more reliable and useful and to preventing them from misbehaving.
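The “algorithm” in that last step is typically a reward model trained on pairwise preferences, which is then used to steer the chatbot with reinforcement learning. Below is a minimal PyTorch sketch of the pairwise (Bradley-Terry) preference loss at the heart of this setup; the tiny linear reward head and random embeddings are illustrative stand-ins, not OpenAI’s architecture.

```python
# Minimal sketch of reward-model training from pairwise human preferences,
# the core of RLHF. The toy reward head is illustrative only.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: push r(chosen) above r(rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy reward head over precomputed response embeddings.
reward_head = torch.nn.Linear(768, 1)
opt = torch.optim.Adam(reward_head.parameters(), lr=1e-4)

chosen_emb = torch.randn(32, 768)    # embeddings of preferred responses
rejected_emb = torch.randn(32, 768)  # embeddings of dispreferred responses

loss = preference_loss(reward_head(chosen_emb), reward_head(rejected_emb))
opt.zero_grad(); loss.backward(); opt.step()
```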

“RLHF does work very well, but it has some key limitations,” says Nat McAleese, a researcher at OpenAI involved with the new work. For one thing, human feedback can be inconsistent. For another, it can be difficult for even skilled humans to rate extremely complex outputs, such as sophisticated software code. The process can also optimize a model to produce output that seems convincing rather than actually being accurate.

    OpenAI developed a new model by fine-tuning its most powerful offering, GPT-4, to assist human trainers tasked with assessing code. The company found that the new model, dubbed CriticGPT, could catch bugs that humans missed, and that human judges found its critiques of code to be better 63 percent of the time. OpenAI will look at extending the approach to areas beyond code in the future.

    “We’re starting work to integrate this technique into our RLHF chat stack,” McAleese says. He notes that the approach is imperfect, since CriticGPT can also make mistakes by hallucinating, but he adds that the technique could help make OpenAI’s models as well as tools like ChatGPT more accurate by reducing errors in human training. He adds that it might also prove crucial in helping AI models become much smarter, because it may allow humans to help train an AI that exceeds their own abilities. “And as models continue to get better and better, we suspect that people will need more help,” McAleese says.

    The new technique is one of many now being developed to improve large language models and squeeze more abilities out of them. It is also part of an effort to ensure that AI behaves in acceptable ways even as it becomes more capable.

    Earlier this month, Anthropic, a rival to OpenAI founded by ex-OpenAI employees, announced a more capable version of its own chatbot, called Claude, thanks to improvements in the model’s training regimen and the data it is fed. Anthropic and OpenAI have both also recently touted new ways of inspecting AI models to understand how they arrive at their output in order to better prevent unwanted behavior such as deception.

The new technique might help OpenAI train increasingly powerful AI models while ensuring their output is more trustworthy and aligned with human values, especially if the company successfully deploys it in more areas than code. OpenAI has said that it is training its next major AI model, and the company is evidently keen to show that it is serious about ensuring that it behaves. This follows the dissolution of a prominent team dedicated to assessing the long-term risks posed by AI. The team was co-led by Ilya Sutskever, a cofounder of the company and former board member who briefly pushed CEO Sam Altman out of the company before recanting and helping him regain control. Several members of that team have since criticized the company for taking undue risks as it rushes to develop and commercialize powerful AI algorithms.

    Dylan Hadfield-Menell, a professor at MIT who researches ways to align AI, says the idea of having AI models help train more powerful ones has been kicking around for a while. “This is a pretty natural development,” he says.

Hadfield-Menell notes that the researchers who originally developed techniques used for RLHF discussed related ideas several years ago. He says it remains to be seen how generally applicable and powerful the approach is. “It might lead to big jumps in individual capabilities, and it might be a stepping stone towards sort of more effective feedback in the long run,” he says.

Will Knight

  • My Memories Are Just Meta’s Training Data Now

    In R. C. Sherriff’s novel The Hopkins Manuscript, readers are transported to a world 800 years after a cataclysmic event ended Western civilization. In pursuit of clues about a blank spot in their planet’s history, scientists belonging to a new world order discover diary entries in a swamp-infested wasteland formerly known as England. For the inhabitants of this new empire, it is only through this record of a retired school teacher’s humdrum rural life, his petty vanities and attempts to breed prize-winning chickens, that they begin to learn about 20th-century Britain.

    If I were to teach futuristic beings about life on earth, I once believed I could produce a time capsule more profound than Sherriff’s small-minded protagonist, Edgar Hopkins. But scrolling through my decade-old Facebook posts this week, I was presented with the possibility that my legacy may be even more drab.

    Earlier this month, Meta announced that my teenage status updates were exactly the kind of content it wants to pass on to future generations of artificial intelligence. From June 26, old public posts, holiday photos, and even the names of millions of Facebook and Instagram users around the world would effectively be treated as a time capsule of humanity and transformed into training data.

    That means my mundane posts about university essay deadlines (“3 energy drinks down 1,000 words to go”) as well as unremarkable holiday snaps (one captures me slumped over my phone on a stationary ferry) are about to become part of that corpus. The fact that these memories are so dull, and also very personal, makes Meta’s interest more unsettling.

    The company says it is only interested in content that is already public: private messages, posts shared exclusively with friends, and Instagram Stories are out of bounds. Despite that, AI is suddenly feasting on personal artifacts that have, for years, been gathering dust in unvisited corners of the internet. For those reading from outside Europe, the deed is already done. The deadline announced by Meta applied only to Europeans. The posts of American Facebook and Instagram users have been training Meta AI models since 2023, according to company spokesperson Matthew Pollard.

    Meta is not the only company turning my online history into AI fodder. WIRED’s Reece Rogers recently discovered that Google’s AI search feature was copying his journalism. But finding out which personal remnants exactly are feeding future chatbots was not easy. Some sites I’ve contributed to over the years are hard to trace. Early social network Myspace was acquired by Time Inc. in 2016, which in turn was acquired by a company called Meredith Corporation two years later. When I asked Meredith about my old account, they replied that Myspace had since been spun off to an advertising firm, Viant Technology. An email to a company contact listed on its website was returned with a message that the address “couldn’t be found.”

Asking companies still in business about my old accounts was more straightforward. Blogging platform Tumblr, owned by WordPress owner Automattic, said that unless I’d opted out, the public posts I made as a teenager will be shared with “a small network of content and research partners, including those that train AI models,” per a February announcement. Yahoo Mail, which I used for years, told me that a sample of old emails—which have apparently been “anonymized” and “aggregated”—are being “utilized” by an AI model internally to do things like summarize messages. Microsoft-owned LinkedIn also said my public posts were being used to train AI, although some “personal” details included in those posts were excluded, according to a company spokesperson, who did not specify what those personal details were.

Morgan Meaker

  • Adobe Says It Won’t Train AI Using Artists’ Work. Creatives Aren’t Convinced

    When users first found out about Adobe’s new terms of service (which were quietly updated in February), there was an uproar. Adobe told users it could access their content “through both automated and manual methods” and use “techniques such as machine learning in order to improve [Adobe’s] Services and Software.” Many understood the update as the company forcing users to grant unlimited access to their work, for purposes of training Adobe’s generative AI: Firefly.

Late on Tuesday, Adobe issued a clarification: In an updated version of its terms of service agreement, it pledged not to train AI on user content stored locally or in the cloud and gave users the option to opt out of content analytics.

With Adobe caught in the crossfire of intellectual property lawsuits, the ambiguous language of the earlier terms update laid bare a climate of acute skepticism among artists, many of whom depend heavily on Adobe for their work. “They already broke our trust,” says Jon Lam, a senior storyboard artist at Riot Games, referring to how award-winning artist Brian Kesinger discovered generated images in the style of his art being sold under his name on Adobe’s stock image site, without his consent. Earlier this month, the estate of late photographer Ansel Adams publicly scolded Adobe for allegedly selling generative AI imitations of his work.

Scott Belsky, Adobe’s chief strategy officer, had tried to assuage concerns when artists started protesting, clarifying that the machine learning in question refers to the company’s non-generative AI tools. Photoshop’s “Content Aware Fill,” which lets users seamlessly remove objects from an image, is one of many features powered by machine learning. But while Adobe insists that the updated terms do not give the company ownership of users’ content and that it will never use that content to train Firefly, the misunderstanding triggered a bigger discussion about the company’s market monopoly and how a change like this could threaten artists’ livelihoods at any point. Lam is among the artists who still believe that, despite Adobe’s clarification, the company will use work created on its platform to train Firefly without the creator’s consent.

The nervousness over non-consensual use and monetization of copyrighted work by generative AI models is not new. Early last year, artist Karla Ortiz was able to prompt images of her work using her name on various generative AI models, an offense that gave rise to a class action lawsuit against Midjourney, DeviantArt, and Stability AI. Ortiz was not alone—Polish fantasy artist Greg Rutkowski found that his name was one of the most commonly used prompts in Stable Diffusion when the tool first launched in 2022.

As the owner of Photoshop and creator of the PDF format, Adobe has reigned as the industry standard for over 30 years, powering the majority of the creative class. Its attempt to acquire product design company Figma was blocked and abandoned in 2023 over antitrust concerns, a testament to the company’s size.

Adobe specifies that Firefly is “ethically trained” on Adobe Stock, but Eric Urquhart, a longtime stock image contributor, insists that “there was nothing ethical about how Adobe trained the AI for Firefly,” pointing out that Adobe does not own the rights to images from individual contributors. Urquhart originally put his images up on Fotolia, a stock image site, where he agreed to licensing terms that did not specify any uses for generative AI. Adobe then acquired Fotolia in 2015 and rolled out silent terms-of-service updates that later allowed the company to train Firefly on Urquhart’s photos without his explicit consent. “The language in the current change of TOS, it’s very similar to what I saw in the Adobe Stock TOS,” he says.

Tiffany Ng

  • OpenAI Offers a Peek Inside the Guts of ChatGPT

    ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

    Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.

    Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.

    The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI—are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

    ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.

    “Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation lies in refining the network that peers inside the system of interest to identify those concepts, making it more efficient.

    OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
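The accompanying paper describes this second network as a sparse autoencoder trained on the big model’s internal activations: it learns an overcomplete dictionary of directions of which only a few fire at once, and those directions often correspond to human-interpretable concepts. Below is a simplified PyTorch sketch; the dimensions, the plain L1 sparsity penalty, and the single training step are deliberate simplifications of the paper’s larger, more refined recipe.

```python
# Simplified sparse autoencoder of the kind used to find interpretable
# "concept" directions in a model's activations. Sizes are illustrative.
import torch

d_model, d_dict = 768, 8192  # activation width, dictionary size

class SparseAutoencoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_dict)
        self.dec = torch.nn.Linear(d_dict, d_model)

    def forward(self, acts):
        codes = torch.relu(self.enc(acts))   # sparse concept activations
        return self.dec(codes), codes

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3

acts = torch.randn(1024, d_model)  # stand-in for captured model activations
recon, codes = sae(acts)
# Reconstruct the activations while keeping the codes sparse.
loss = torch.nn.functional.mse_loss(recon, acts) + l1_coeff * codes.abs().mean()
opt.zero_grad(); loss.backward(); opt.step()
```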

Will Knight

  • Learning to Live With Google’s AI Overviews

    Google has spent the past year lustily rolling out AI features across its platforms. But with each launch, it is becoming more clear that some of these so-called enhancements should have simmered a little longer. The latest update to stoke equal parts excitement and ridicule is AI Overviews, the new auto-generated summary boxes that appear at the top of some Google search results.

In theory, AI Overviews are meant to answer questions and neatly summarize key information about people’s search queries, offering links to the sources the summaries were pulled from and making search more immediately useful. In reality, these AI Overviews have been kinda messy. The information the summary confidently displays can be simply, and sometimes comically, wrong. Even when the AI Overview is correct, it typically offers only a slim account of the topic without the added context—or attribution—contained in the web pages it’s pulling from. The resulting criticisms have reportedly forced Google to dial back the number of search queries that trigger AI Overviews, and the summaries are now appearing less frequently than they were at launch.

    This week, we talk with WIRED writers Kate Knibbs and Reece Rogers about the rollout, how Google has been managing it, and what it’s like to watch our journalism get gobbled up by these hungry, hungry infobots.

    Show Notes

    Read Kate’s story about Google trimming the frequency of its AI Overviews. Read Reece’s story about how Google’s AI Overviews copied his original work. Read Lauren’s story about the end of Google Search as we know it.

    Recommendations

    Kate recommends Token Supremacy by Zachary Small. Reece recommends the game Balatro. Lauren recommends the poetry book Technelegy by Sasha Stiles. Mike recommends the book Neu Klang: The Definitive History of Krautrock by Christoph Dallach.

Kate Knibbs can be found on social media @Knibbs (X) or @extremeknibbs (Threads/IG). Reece Rogers is @reece___rogers. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Ring the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

    How to Listen

    You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

    If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here’s the RSS feed.

Michael Calore, Lauren Goode

  • Marc Andreessen Once Called Online Safety Teams an Enemy. He Still Wants Walled Gardens for Kids

    In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a number of enemies to technological progress. Among them were “tech ethics” and “trust and safety,” a term used for work on online content moderation, which he said had been used to subject humanity to “a mass demoralization campaign” against new technologies such as artificial intelligence.

    Andreessen’s declaration drew both public and quiet criticism from people working in those fields—including at Meta, where Andreessen is a board member. Critics saw his screed as misrepresenting their work to keep internet services safer.

    On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he’s in favor of guardrails. “I want him to be able to sign up for internet services, and I want him to have like a Disneyland experience,” the investor said in an onstage conversation at a conference for Stanford University’s Human-Centered AI research institute. “I love the internet free-for-all. Someday, he’s also going to love the internet free-for-all, but I want him to have walled gardens.”

    Contrary to how his manifesto may have read, Andreessen went on to say he welcomes tech companies—and by extension their trust and safety teams—setting and enforcing rules for the type of content allowed on their services.

    “There’s a lot of latitude company by company to be able to decide this,” he said. “Disney imposes different behavioral codes in Disneyland than what happens in the streets of Orlando.” Andreessen alluded to how tech companies can face government penalties for allowing child sexual abuse imagery and certain other types of content, so they can’t be without trust and safety teams altogether.

    So what kind of content moderation does Andreessen consider an enemy of progress? He explained that he fears two or three companies dominating cyberspace and becoming “conjoined” with the government in a way that makes certain restrictions universal, causing what he called “potent societal consequences” without specifying what those might be. “If you end up in an environment where there is pervasive censorship, pervasive controls, then you have a real problem,” Andreessen said.

    The solution as he described it is ensuring competition in the tech industry and a diversity of approaches to content moderation, with some having greater restrictions on speech and actions than others. “What happens on these platforms really matters,” he said. “What happens in these systems really matters. What happens in these companies really matters.”

    Andreessen didn’t bring up X, the social platform run by Elon Musk and formerly known as Twitter, in which his firm Andreessen Horowitz invested when the Tesla CEO took over in late 2022. Musk soon laid off much of the company’s trust and safety staff, shut down Twitter’s AI ethics team, relaxed content rules, and reinstated users who had previously been permanently banned.

    Those changes paired with Andreessen’s investment and manifesto created some perception that the investor wanted few limits on free expression. His clarifying comments were part of a conversation with Fei-Fei Li, codirector of Stanford’s HAI, titled “Removing Impediments to a Robust AI Innovative Ecosystem.”

    During the session, Andreessen also repeated arguments he has made over the past year that slowing down development of AI through regulations or other measures recommended by some AI safety advocates would repeat what he sees as the mistaken US retrenchment from investment in nuclear energy several decades ago.

    Nuclear power would be a “silver bullet” to many of today’s concerns about carbon emissions from other electricity sources, Andreessen said. Instead the US pulled back, and climate change hasn’t been contained the way it could have been. “It’s an overwhelmingly negative, risk-aversion frame,” he said. “The presumption in the discussion is, if there are potential harms therefore there should be regulations, controls, limitations, pauses, stops, freezes.”

    For similar reasons, Andreessen said, he wants to see greater government investment in AI infrastructure and research and a freer rein given to AI experimentation by, for instance, not restricting open-source AI models in the name of security. If he wants his son to have the Disneyland experience of AI, some rules, whether from governments or trust and safety teams, may be necessary too.

Paresh Dave

  • Google’s AI Overviews Will Always Be Broken. That’s How AI Works

    A week after its algorithms advised people to eat rocks and put glue on pizza, Google admitted Thursday that it needed to make adjustments to its bold new generative AI search feature. The episode highlights the risks of Google’s aggressive drive to commercialize generative AI—and also the treacherous and fundamental limitations of that technology.

Google’s AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI’s ChatGPT, to generate written answers to some search queries by summarizing information found online. The current AI boom is built around LLMs’ impressive fluency with text, but the software can also use that facility to put a convincing gloss on untruths or errors. Using the technology to summarize online information promises to make search results easier to digest, but it is hazardous when online sources are contradictory or when people use the information to make important decisions.
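Google hasn’t detailed AI Overviews’ internals, but the general retrieve-then-summarize pattern looks something like the sketch below: gather snippets, ask the model to answer only from them with citations, and tell it to flag disagreement instead of guessing. The prompt and the stubbed search and model calls are hypothetical illustrations, not Google’s implementation.

```python
# Hypothetical sketch of the summarize-with-sources pattern behind
# AI-overview-style features. All names and prompts are illustrative.

def search_snippets(query: str) -> list[dict]:
    """Stand-in for a web search returning [{'url': ..., 'text': ...}]."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Stand-in for any large-language-model API call."""
    raise NotImplementedError

def overview(query: str) -> str:
    snippets = search_snippets(query)
    sources = "\n".join(f"[{i}] {s['url']}: {s['text']}"
                        for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using ONLY the numbered sources below. Cite a "
        "source number after every claim. If the sources disagree or do not "
        "answer the question, say so instead of guessing.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)
```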

    “You can get a quick snappy prototype now fairly quickly with an LLM, but to actually make it so that it doesn’t tell you to eat rocks takes a lot of work,” says Richard Socher, who made key contributions to AI for language as a researcher and, in late 2021, launched an AI-centric search engine called You.com.

    Socher says wrangling LLMs takes considerable effort because the underlying technology has no real understanding of the world and because the web is riddled with untrustworthy information. “In some cases it is better to actually not just give you an answer, or to show you multiple different viewpoints,” he says.

Google’s head of search, Liz Reid, said in a company blog post late Thursday that Google did extensive testing ahead of launching AI Overviews. But she added that errors like the rock-eating and glue-pizza examples—in which Google’s algorithms pulled information from a satirical article and a jocular Reddit comment, respectively—had prompted additional changes. They include better detection of “nonsensical queries,” Google says, and making the system rely less heavily on user-generated content.

    You.com routinely avoids the kinds of errors displayed by Google’s AI Overviews, Socher says, because his company developed about a dozen tricks to keep LLMs from misbehaving when used for search.

    “We are more accurate because we put a lot of resources into being more accurate,” Socher says. Among other things, You.com uses a custom-built web index designed to help LLMs steer clear of incorrect information. It also selects from multiple different LLMs to answer specific queries, and it uses a citation mechanism that can explain when sources are contradictory. Still, getting AI search right is tricky. WIRED found on Friday that You.com failed to correctly answer a query that has been known to trip up other AI systems, stating that “based on the information available, there are no African nations whose names start with the letter ‘K.’” In previous tests, it had aced the query.

    Google’s generative AI upgrade to its most widely used and lucrative product is part of a tech-industry-wide reboot inspired by OpenAI’s release of the chatbot ChatGPT in November 2022. A couple of months after ChatGPT debuted, Microsoft, a key partner of OpenAI, used its technology to upgrade its also-ran search engine Bing. The upgraded Bing was beset by AI-generated errors and odd behavior, but the company’s CEO, Satya Nadella, said that the move was designed to challenge Google, saying “I want people to know we made them dance.”

    Some experts feel that Google rushed its AI upgrade. “I’m surprised they launched it as it is for as many queries—medical, financial queries—I thought they’d be more careful,” says Barry Schwartz, news editor at Search Engine Land, a publication that tracks the search industry. The company should have better anticipated that some people would intentionally try to trip up AI Overviews, he adds. “Google has to be smart about that,” Schwartz says, especially when it is showing the results by default on its most valuable product.

    Lily Ray, a search engine optimization consultant, was for a year a beta tester of the prototype that preceded AI Overviews, which Google called Search Generative Experience. She says she was unsurprised to see the errors that appeared last week given how the previous version tended to go awry. “I think it’s virtually impossible for it to always get everything right,” Ray says. “That’s the nature of AI.”

    Will Knight

  • Google Admits Its AI Overviews Search Feature Screwed Up

    Google Admits Its AI Overviews Search Feature Screwed Up

    When bizarre and misleading answers to search queries generated by Google’s new AI Overview feature went viral on social media last week, the company issued statements that generally downplayed the notion the technology had problems. Late Thursday, the company’s head of search, Liz Reid, admitted that the flubs had highlighted areas that needed improvement, writing, “We wanted to explain what happened and the steps we’ve taken.”

    Reid’s post directly referenced two of the most viral, and wildly incorrect, AI Overview results. One saw Google’s algorithms endorse eating rocks because doing so “can be good for you,” and the other suggested using nontoxic glue to thicken pizza sauce.

    Rock eating is not a topic many people were ever writing or asking questions about online, so there aren’t many sources for a search engine to draw on. According to Reid, the AI tool found an article from The Onion, a satirical website, that had been reposted by a software company, and it misinterpreted the information as factual.

    As for Google telling its users to put glue on pizza, Reid effectively attributed the error to a sense of humor failure. “We saw AI Overviews that featured sarcastic or troll-y content from discussion forums,” she wrote. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.”

    It’s probably best not to make any kind of AI-generated dinner menu without carefully reading it through first.

    Reid also suggested that judging the quality of Google’s new take on search based on viral screenshots would be unfair. She claimed the company did extensive testing before launch and that its data shows people value AI Overviews, including indications that users are more likely to stay on a page they discover that way.

    Why the embarrassing failures? Reid characterized the mistakes that won attention as the result of an internet-wide audit that wasn’t always well intended. “There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”

    Google claims some widely distributed screenshots of AI Overviews gone wrong were fake, which seems to be true based on WIRED’s own testing. For example, a user on X posted a screenshot that appeared to be an AI Overview responding to the question “Can a cockroach live in your penis?” with an enthusiastic confirmation from the search engine that this is normal. The post has been viewed over 5 million times. Upon further inspection, though, the format of the screenshot doesn’t align with how AI Overviews are actually presented to users. WIRED was not able to recreate anything close to that result.

    And it’s not just users on social media who were tricked by misleading screenshots of fake AI Overviews. The New York Times issued a correction to its reporting about the feature and clarified that AI Overviews never suggested users should jump off the Golden Gate Bridge if they are experiencing depression—that was just a dark meme on social media. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression,” Reid wrote Thursday. “Those AI Overviews never appeared.”

    Yet Reid’s post also makes clear that not all was right with the original form of Google’s big new search upgrade. The company made “more than a dozen technical improvements” to AI Overviews, she wrote.

    Only four are described: better detection of “nonsensical queries” not worthy of an AI Overview; making the feature rely less heavily on user-generated content from sites like Reddit; offering AI Overviews less often in situations users haven’t found them helpful; and strengthening the guardrails that disable AI summaries on important topics such as health.
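
    Taken together, those four fixes amount to gating logic that decides whether an AI Overview should appear at all. A minimal sketch of what such a gate might look like follows; the thresholds, inputs, and topic list are assumptions for illustration, not Google’s published behavior.

    ```python
    # Hypothetical gate reflecting the four described fixes; the
    # thresholds and classifications are invented for illustration.

    SENSITIVE_TOPICS = {"health", "medicine", "finance"}

    def should_show_overview(topic: str,
                             is_nonsensical: bool,
                             ugc_fraction: float,
                             past_helpfulness: float) -> bool:
        if is_nonsensical:                    # fix 1: detect nonsensical queries
            return False
        if topic in SENSITIVE_TOPICS:         # fix 4: guardrails on important topics
            return False
        if ugc_fraction > 0.5:                # fix 2: lean less on user-generated content
            return False
        return past_helpfulness >= 0.6        # fix 3: show only where users found it helpful

    print(should_show_overview("cooking", False, 0.2, 0.8))  # True
    print(should_show_overview("health", False, 0.1, 0.9))   # False
    ```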

    There was no mention in Reid’s blog post of significantly rolling back the AI summaries. Google says it will continue to monitor feedback from users and adjust the features as needed.

    Reece Rogers

  • Most US TikTok Creators Don’t Think a Ban Will Happen

    Most US TikTok Creators Don’t Think a Ban Will Happen

    A majority of US TikTok creators don’t believe the platform will be banned within a year, and most haven’t seen brands they work for shift their marketing budgets away from the app, according to a new survey of people who earn money from posting content on TikTok shared exclusively with WIRED.

    The findings suggest that TikTok’s influencer economy largely isn’t experiencing existential dread after Congress passed a law last month that put the future of the app’s US operations in jeopardy. The bill demands that TikTok separate from its Chinese parent company within a year or face a nationwide ban; TikTok is challenging the constitutionality of the measure in court.

    Fohr, an influencer marketing platform that connects creators with clients for sponsored content, polled US-based TikTok creators on its platform with at least 10,000 followers. It got 200 responses, half from people who rely on influencing as their sole source of income. Out of the respondents, 62 percent said they didn’t think TikTok would be banned by 2025, while the remaining 38 percent said they believed it would be.

    Some creators may be skeptical that a ban will really happen after they watched the Trump White House and Congress try and fail several times to crack down on TikTok over the past few years. The platform has so far only continued to grow more popular in the US, sparking alarm in Silicon Valley over the threat its competition poses. There’s also the possibility TikTok will be sold to a group of American investors—several interested bidders have emerged—though TikTok has made it clear that such an acquisition would be practically impossible.

    Some creators are simply struggling to believe the bizarre situation their favorite app has landed in. “I’m in denial, because I think the TikTok ban is ridiculous,” one anonymous creator told Fohr through its survey. “I think our government has bigger things to worry about than banning a platform where people are allowed to express their views and opinions.”

    Most creators said they haven’t lost business from brands that pay for marketing content on TikTok since the new law was signed: 83 percent of the influencers who responded said their sponsorships have been unaffected. But the rest had seen signs of brands pulling back from the app or at least diversifying their marketing. Some 7 percent said a brand had paused or canceled a campaign they worked on, and 8 percent said a brand had asked to shift a deliverable to another social media platform or at least inquired about such a change.

    Companies may be reluctant to walk away from TikTok because it’s become one of the most popular avenues for consumers to discover new products, particularly from small businesses. Over the past year, TikTok has tried to leverage that influence into a new revenue stream through an ecommerce feature called TikTok Shop. Over 11 percent of US households have made a purchase through TikTok Shop since September 2023, according to credit card transaction data published in April by the research firm Earnest Analytics.

    It doesn’t look as though the passage of the divestiture bill last month prompted people to spend significantly less time on TikTok or avoid the app altogether. The popularity of the platform in US app stores has remained largely consistent over the past month, according to the market-intelligence firm Sensor Tower. And Fohr found that 60 percent of creators said their video views have remained the same, 28 percent said they had seen them fall, and 10 percent reported that they had increased. These shifts could simply be caused by routine changes TikTok makes to its algorithm, variability in the content that influencers are sharing, or the whims of users consuming videos.

    TikTok’s rise has spurred US tech giants to mimic many of its features, with Google’s YouTube pushing its Shorts format and Meta’s Instagram launching Reels. Fohr’s survey suggests that if creators start leaving TikTok because of uncertainty about the app’s future or a ban, Instagram stands to benefit the most. A clear majority of creators—67 percent—said they saw it as the best alternative for growing their audience, while 22 percent cited YouTube. Only a small fraction pointed to Snapchat, Pinterest, and other platforms.

    Several of the creators, however, said that it’s harder to gain traction on Instagram compared to TikTok, and one noted that Meta’s platform doesn’t offer anything equivalent to TikTok’s Creativity Program, which pays users based on how many views and other engagement metrics their videos receive.

    Across social platforms, the most common way for creators to get paid is by signing deals with brands to make posts featuring their products. But Fohr’s survey also showed the growth of a novel monetization scheme called the TikTok Creative Challenge, which the app launched last year. It allows companies to post requests for creators to make marketing videos that brands can then use on their own channels. Influencers are compensated based on how well their video performs in terms of views and engagement.
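
    TikTok hasn’t published how those payouts are computed, so any formula is guesswork, but a performance-based scheme of the kind described might look roughly like the sketch below, with entirely hypothetical rates.

    ```python
    # Hypothetical view-and-engagement payout model; TikTok's real
    # Creative Challenge formula is not public, and these rates are invented.

    def estimated_payout(views: int, likes: int, shares: int,
                         rpm: float = 0.50, per_interaction: float = 0.01) -> float:
        # rpm: assumed revenue per 1,000 qualified views.
        # per_interaction: assumed bonus per like or share.
        return (views / 1000) * rpm + (likes + shares) * per_interaction

    print(f"${estimated_payout(views=250_000, likes=12_000, shares=800):.2f}")  # $253.00
    ```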

    In Fohr’s survey, that type of content, known as UGC, represented the largest TikTok revenue stream for 18 percent of creators. Whatever happens to TikTok in the US, history suggests that it may not be long before its American competitors begin rolling out their own user-generated content initiatives.

    Louise Matsakis

  • The US Is Forming a Global AI Safety Network With Key Allies

    The US Is Forming a Global AI Safety Network With Key Allies

    The US is widely seen as the global leader in artificial intelligence, thanks to companies like OpenAI, Google, and Meta. But the US government says it needs help from other nations to manage the risks posed by AI technology.

    At an international summit on AI safety in Seoul on Tuesday, the US delivered a message from Secretary of Commerce Gina Raimondo announcing that a global network of AI safety institutes spanning the US, UK, Japan, Canada, and other allies will collaborate to contain the technology’s risks. She also urged other countries to join up.

    “Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers,” Secretary Raimondo said in a statement released ahead of the announcement. “It is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

    The US government has previously said advances in AI create national security risks, including the potential to automate or accelerate the development of bioweapons, or to enable more damaging cyberattacks on critical infrastructure.

    One challenge for the US, alluded to in Raimondo’s statement, is that some national governments may not be eager to fall in line with its approach to AI. She said the US, the UK, Japan, Canada, Singapore, and the European AI Office would work together as the founding members of a “global network of AI safety institutes.”

    The Commerce Department declined to comment on whether China had been invited to join the new AI safety network. Fears that China will use advanced AI to empower its military or threaten the US led first the Trump administration and now the Biden administration to roll out a series of restrictions on Chinese access to key technology.

    The US and China have at least opened a line of communication. A meeting between US President Joe Biden and Chinese President Xi Jinping last November saw the two superpowers agree to hold talks on AI risks and safety. Representatives from the nations met in Switzerland last week to hold the first round of discussions.

    The Commerce Department said that representatives of the new global AI safety network’s members will meet in San Francisco later this year. A blueprint issued by the agency says that the network will work together to develop and agree upon methodologies and tools for evaluating AI models and ways to mitigate the risks of AI. “We hope to help develop the science and practices that underpin future arrangements for international AI governance,” the document says. A Commerce Department spokesperson said that the network would help nations tap into talent, experiment more quickly, and agree on AI standards.

    The Seoul summit on AI safety this week is co-hosted by the UK government, which convened the first major international meeting on the topic last November. That summit culminated in 28 countries, including the US and China, along with the EU, signing a declaration warning that artificial intelligence is advancing with such speed and uncertainty that it could cause “serious, even catastrophic, harm.”

    Will Knight

  • It’s Time to Believe the AI Hype

    It’s Time to Believe the AI Hype

    Folks, when dogs talk, we’re talking Biblical disruption. Do you think that future models will do worse on the law exams?

    If nothing else, this week proves that the rate of AI progress isn’t slowing at all. Just ask the people building these models. “A lot of things have happened—internet, mobile,” says Demis Hassabis, cofounder of DeepMind and now Google’s AI czar, in a post-keynote chat at I/O. “AI is going maybe three or four times faster than those other revolutions. We’re in a period of 25 or 30 years of massive change.” When I asked Google search VP Liz Reid to name a big challenge, she didn’t say it was to keep the innovation going—instead, she cited the difficulty of absorbing the pace of change. “As the technology is early, the biggest challenge is about even what’s possible,” she says. “It’s understanding what the models are great at today, and what they are not great at but will be great at in three months or six months. The technology is changing so fast that you can get two researchers in the room who are working on the same project, and they’ll have totally different views of when something is possible.”

    There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. And when non-techies see the products for themselves, they most often become believers too. (Including Joe Biden, after a March 2023 demo of ChatGPT.) That’s why Microsoft is well along on a total AI reinvention, why Mark Zuckerberg is now refocusing Meta to create artificial general intelligence, why Amazon and Apple are desperately trying to keep up, and why countless startups are focusing on AI. And because all of these companies are trying to get an edge, the competitive fervor is ramping up new innovations at a frantic pace. Do you think it was a coincidence that OpenAI made its announcement a day before Google I/O?

    Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more. But the AI revolution will change our lives, and change us, for better or worse. And we haven’t even seen GPT-5 yet.

    Time Travel

    Sure, I could be wrong about AI. But consider the last time I made such a call. In 1995, I joined Newsweek—the same organ where Clifford Stoll had just dismissed the internet as a hoax—and at the end of the year argued of this new digital medium, “This Changes Everything.” Some of my colleagues thought I’d bought into overblown hype. Actually, reality exceeded my hyperbole.

    In 1995, the Internet ruled. You talk about a revolution? For once, the shoe fits. “In the long run it’s hard to exaggerate the importance of the Internet,” says Paul Maritz, a Microsoft VP. “It really is about opening communications to the masses.” And 1995 was the year that the masses started coming. “If you look at the numbers they’re quoting, with the Web doubling every 53 days, that’s biological growth, like a red tide or population of lemmings,” says Kevin Kelly, executive editor of WIRED. “I don’t know if we’ve ever seen technology exhibit that sort of growth.” In fact, there’s a raging controversy over exactly how many people regularly use the Net. A recent Nielsen survey pegged the number at an impressive 24 million North Americans. During the course of the year the discussion of the Internet ranged from sex to stock prices to software standards. But the most significant aspect of the Internet has nothing to do with money or technology, really. It’s us.

    Steven Levy
