ReportWire

Tag: machine learning

  • State to use AI to improve government

    BOSTON — Artificial intelligence is being used for everything from guiding self-driving cars and developing life-saving medicines to powering online search engines that help you find a plumber or pick holiday gifts for your family.

    And the machine learning platform could soon be employed by the state government to speed up the processes of getting a state permit, renewing a vehicle registration or detecting fraud in public benefits programs.

    The Healey administration announced Friday that it plans to deploy ChatGPT's artificial intelligence assistant platform in executive branch agencies with the goal of making state government work "better and faster" for residents.

    "This is about making government faster, more efficient, and more effective for the people we serve," Gov. Maura Healey said in a prepared statement.

    Her administration said the AI rollout will be implemented as a phased approach across the executive branch "and will provide a safe and secure environment that protects state data." The contract with ChatGPT was negotiated through a competitive procurement process, officials said.

    Once deployed, Massachusetts will be the first state to adopt the technology for the entire 40,000-employee executive branch, according to the Healey administration.

    The rollout of the new policy comes as state lawmakers are considering a myriad of proposals aimed at adding guardrails around use of the new technology.

    One proposal would require large artificial intelligence technology companies such as the online chatbot ChatGPT to register with the state Attorney General's Office and disclose information about their algorithms.

    Another bill calls for banning "deepfakes" or computer-generated manipulations of a person's voice or likeness using machine learning to create visual and audio content that appears to be real. The technology is being used to generate fake imagery for anything from "revenge porn" to political mudslinging.

    In 2024, Attorney General Andrea Campbell sought to tighten the reins on artificial intelligence developers, suppliers and users, issuing new guidance that warned them not to run afoul of the state's laws on consumer protection, antidiscrimination and data security.

    Last week, the state House of Representatives approved a pair of bipartisan bills setting new restrictions on the use of artificial intelligence in political campaigning. The proposals would require campaigns to disclose the use of AI in political ads and ban "deceptive" communications in campaign ads 90 days before an election.

    ChatGPT, which was created by San Francisco-based OpenAI, an artificial intelligence research firm co-founded by Elon Musk, allows users to enter themes, prompts and guidelines into the AI system that comes up with a response as if a human wrote it.

    On its website, the company says the ChatGPT bot is a "safe and useful" AI system that interacts in a "conversational way" with users, making it possible to "answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."

    But the emergence of AI technology has been steeped in controversy, with critics warning that Congress and state governments need to move quickly to set regulations governing its use.

    Healey administration officials say the rollout of ChatGPT will be done within a "walled-off, secure environment that protects state data and ensures that employee chat inputs do not train public AI models." They said use of the technology will be governed by current state regulations and policies, which will be "regularly" updated, officials say.

    "By making ChatGPT available to the state workforce, we are empowering our employees with a secure, governed tool that can enhance service delivery while maintaining the highest standards for data privacy, security, and thoughtful, transparent usage of AI," Jason Snyder, secretary of the Executive Office of Technology Services and Security, said in a statement.

    "Our focus is not just adopting AI, but doing so in a way that reflects our values, and strengthens trust with the residents we serve."

    Christian M. Wade covers the Massachusetts Statehouse for North of Boston Media Group's newspapers and websites. Email him at cwade@cnhinews.com.

    By Christian M. Wade | Statehouse Reporter

  • Exclusive: AI for patent filings startup Ankar secures $20 million Series A round | Fortune

    Two former Palantir employees hoping to use AI to transform the process for filing and managing patents have secured $20 million in investment for their London-based startup, Ankar.

    The Series A funding round for Ankar was led by venture capital firm Atomico, with participation from Index Ventures, Norrsken, and Daphni. The company had announced a £3 million ($4 million) seed round in May that was led by Index, with support from Daphni and Motier Ventures.

    Ankar was founded by Tamar Gomez and Wiem Gharbi in 2024. The pair met while working at Palantir, where they both encountered the time-consuming process of trying to obtain patents for new technology. Gomez, who has a business background, worked as a development strategist for Palantir, while Gharbi, who is a data scientist by training, worked on machine learning applications. They took the name Ankar for their new company from the name of an omniscient and powerful knight found in pre-Islamic poetry. 

    “We are trying to turn IP that has been viewed as a cost center for a very long time into more of a strategic and competitive asset that we need today in a world that is becoming more and more competitive,” Gharbi, who is Ankar’s chief technology officer, told Fortune.

    The new funding for Ankar comes as intellectual property has become increasingly critical to corporate value. Intangible assets like IP now represent up to 90% of the value of S&P 500 companies, according to the World Intellectual Property Organization. Yet the systems for protecting those assets remain stubbornly outdated, according to Gomez and Gharbi, who say they witnessed how time-consuming and difficult it is to obtain a patent when they were working at Palantir.

    “To go from something that’s in the head of the inventor—an innovation—to something that is a bankable asset that can be leveraged by the company in the form of a patent took years, basically,” Gomez, who is Ankar’s CEO, said. “The tools to do so were incredibly legacy or just non-existent. It was like a hodgepodge of manual processes.”

    Patent attorneys can spend weeks searching multiple databases and reading patent filings to try to determine the extent to which, if any, prior patents might conflict with the new invention they were hoping to protect. Then it can take many more weeks to craft a patent application with the right arguments to try to overcome any objections from patent examiners. Securing a patent can take up to 24 months.

    Ankar wants to use large language models to streamline that process. Because these models can search for phrasing that has the same meaning, even if it doesn’t use the exact same keywords, they can quickly surface patent filings from databases that previously would have taken multiple searches and hours of reading to discover.
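
    The mechanics of that kind of meaning-based search can be sketched in a few lines. The toy below is not Ankar's system: it swaps a learned embedding model for a hand-written synonym table, but the ranking step (cosine similarity in a shared concept space) is the same shape. All names and data here are illustrative.

```python
from collections import Counter
import math

# Stand-in for a learned embedding model: map surface words to shared
# concepts so that different phrasings land in the same space.
# (Hand-written and illustrative only.)
CONCEPTS = {
    "fastening": "attach", "attachment": "attach", "clip": "attach",
    "mechanism": "device", "apparatus": "device", "device": "device",
    "spinning": "rotate", "rotating": "rotate",
}

def embed(text: str) -> Counter:
    """Represent text as a bag of canonical concepts."""
    return Counter(CONCEPTS.get(w, w) for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def prior_art_search(query: str, filings: list[str], top_k: int = 2) -> list[str]:
    """Rank candidate filings by semantic similarity to the invention description."""
    q = embed(query)
    return sorted(filings, key=lambda f: cosine(q, embed(f)), reverse=True)[:top_k]

filings = [
    "A clip apparatus for securing panels",
    "A spinning blade for lawn mowers",
    "A method of brewing coffee",
]
# "fastening mechanism" shares no literal keyword with the clip filing,
# yet both map to the concepts "attach" and "device", so it ranks first.
print(prior_art_search("fastening mechanism", filings))
```

    This is the behavior the paragraph describes: the match is found even though a keyword search for "fastening" or "mechanism" would surface nothing.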

    The startup’s invention discovery tool searches across 150 million patent applications and 250 million scientific publications and produces reports assessing how “novel” an invention is and what claims have already been made by previously patented inventions that might be similar (what’s known in the patent world as “prior art”). The platform helps inventors harvest their ideas and guides patent attorneys through drafting applications, including spotting gaps in existing patents where claims for a new invention might get the most traction. It also supports patent lawyers when they have to respond to possible challenges from patent examiners, giving them a single view of the entire history of the application process.

    “Patent claims are basically the scope of protection for your invention—like, what are the most important pieces of my invention that I want to protect? [Ankar’s] tool can help suggest an initial set of claims and then help the patent attorney think through potential options for broadening these claims,” Gharbi said. “So it’s no longer about just helping you kind of generate words, because we think that the value of just generating words is going to decrease over time. It’s going to become more about like, how do I generate the best qualities of the scope of protection?”

    The company has secured some notable early customers, including global cosmetics giant L’Oréal and global law firm Vorys. Ankar says that so far its customers have reported an average 40% boost in productivity, with hundreds of hours shifted to high-value strategic work.

    Jean-Yves Legendre, competitive IP intelligence manager at L’Oréal, praised Ankar in a statement, saying that the startup “understood patents, spoke our language, and adapted to our needs.”

    Many global companies, particularly in automotive, electronics, and other R&D-heavy sectors, are redoubling efforts to protect their intellectual property, concerned that generative AI will make it easier for competitors to replicate product designs, architectures, and processes. At the same time, many companies are eager to record and protect their IP because they want to use it to train or fine-tune their own AI models to help boost productivity.

    Ankar plans to use the new funding to double its current 20-person headcount and expand its engineering, product, design, and go-to-market teams across Europe and the U.S.

    This story was originally featured on Fortune.com

    Jeremy Kahn

  • Amazon Is Using Specialized AI Agents for Deep Bug Hunting

    As generative AI pushes the speed of software development, it is also enhancing the ability of digital attackers to carry out financially motivated or state-backed hacks. This means that security teams at tech companies have more code than ever to review while dealing with even more pressure from bad actors. On Monday, Amazon will publish details for the first time of an internal system known as Autonomous Threat Analysis (ATA), which the company has been using to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly search for other, similar flaws, and then develop remediations and detection capabilities to plug holes before attackers find them.

    ATA was born out of an internal Amazon hackathon in August 2024, and security team members say that it has grown into a crucial tool since then. The key concept underlying ATA is that it isn’t a single AI agent developed to comprehensively conduct security testing and threat analysis. Instead, Amazon developed multiple specialized AI agents that compete against each other in two teams to rapidly investigate real attack techniques and different ways they could be used against Amazon’s systems—and then propose security controls for human review.

    “The initial concept was aimed to address a critical limitation in security testing—limited coverage and the challenge of keeping detection capabilities current in a rapidly evolving threat landscape,” Steve Schmidt, Amazon’s chief security officer, tells WIRED. “Limited coverage means you can’t get through all of the software or you can’t get to all of the applications because you just don’t have enough humans. And then it’s great to do an analysis of a set of software, but if you don’t keep the detection systems themselves up to date with the changes in the threat landscape, you’re missing half of the picture.”

    As part of scaling its use of ATA, Amazon developed special “high-fidelity” testing environments that are deeply realistic reflections of Amazon’s production systems, so ATA can both ingest and produce real telemetry for analysis.

    The company’s security teams also made a point to design ATA so every technique it employs, and detection capability it produces, is validated with real, automatic testing and system data. Red team agents that are working on finding attacks that could be used against Amazon’s systems execute actual commands in ATA’s special test environments that produce verifiable logs. Blue team, or defense-focused agents, use real telemetry to confirm whether the protections they are proposing are effective. And anytime an agent develops a novel technique, it also pulls time-stamped logs to prove that its claims are accurate.
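
    The evidence-gating pattern described above can be sketched as follows. This is a hypothetical illustration, not Amazon's ATA code: the `SandboxRun` and `validate_claim` names are assumptions, and a real system would actually execute commands and collect far richer telemetry. The point is only that a claim is accepted when, and only when, matching log evidence exists.

```python
import time

# Hypothetical sketch: a sandbox records timestamped logs for every
# command it runs, and an agent's claim is accepted only if the logs
# back it up.
class SandboxRun:
    """Executes commands in a test environment, recording verifiable logs."""
    def __init__(self):
        self.logs: list[tuple[float, str]] = []

    def execute(self, command: str) -> None:
        # A real environment would run the command and capture its
        # telemetry; here we only record that it was executed.
        self.logs.append((time.time(), f"executed: {command}"))

def validate_claim(run: SandboxRun, claimed_command: str) -> bool:
    """A technique counts only if log evidence for it exists."""
    return any(claimed_command in line for _, line in run.logs)

run = SandboxRun()
run.execute("curl http://internal-service/health")

assert validate_claim(run, "curl http://internal-service/health")  # evidence exists
assert not validate_claim(run, "rm -rf /tmp/data")  # unbacked claim is rejected
```

    Demanding observable evidence for every claim is what the article's sources frame as "hallucination management": an agent cannot report a technique it never actually ran.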

    This verifiability reduces false positives, Schmidt says, and acts as “hallucination management.” Because the system is built to demand certain standards of observable evidence, Schmidt claims that “hallucinations are architecturally impossible.”

    Lily Hay Newman

  • Game Theory Explains How Algorithms Can Drive Up Prices

    The original version of this story appeared in Quanta Magazine.

    Imagine a town with two widget merchants. Customers prefer cheaper widgets, so the merchants must compete to set the lowest price. Unhappy with their meager profits, they meet one night in a smoke-filled tavern to discuss a secret plan: If they raise prices together instead of competing, they can both make more money. But that kind of intentional price-fixing, called collusion, has long been illegal. The widget merchants decide not to risk it, and everyone else gets to enjoy cheap widgets.

    For well over a century, US law has followed this basic template: Ban those backroom deals, and fair prices should be maintained. These days, it’s not so simple. Across broad swaths of the economy, sellers increasingly rely on computer programs called learning algorithms, which repeatedly adjust prices in response to new data about the state of the market. These are often much simpler than the “deep learning” algorithms that power modern artificial intelligence, but they can still be prone to unexpected behavior.

    So how can regulators ensure that algorithms set fair prices? Their traditional approach won’t work, as it relies on finding explicit collusion. “The algorithms definitely are not having drinks with each other,” said Aaron Roth, a computer scientist at the University of Pennsylvania.

    Yet a widely cited 2019 paper showed that algorithms could learn to collude tacitly, even when they weren’t programmed to do so. A team of researchers pitted two copies of a simple learning algorithm against each other in a simulated market, then let them explore different strategies for increasing their profits. Over time, each algorithm learned through trial and error to retaliate when the other cut prices—dropping its own price by some huge, disproportionate amount. The end result was high prices, backed up by mutual threat of a price war.
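
    A stripped-down version of that experiment fits in a short script. The sketch below is not the 2019 paper's setup: it reduces each seller to a stateless bandit learner with no memory of the rival's last price, so it shows the trial-and-error price setting but not the learned retaliation strategies, which require state. All numbers are arbitrary.

```python
import random

# Toy duopoly: each seller is a stateless epsilon-greedy learner that
# repeatedly picks a price and nudges its value estimate toward the
# profit it actually earned. (Illustrative only.)
PRICES = [1, 2, 3]  # low, medium, high

def profit(mine: int, theirs: int) -> float:
    """Customers buy from the cheaper seller; a tie splits the demand."""
    if mine < theirs:
        return float(mine)
    if mine == theirs:
        return mine * 0.5
    return 0.0

def train(steps: int = 5000, eps: float = 0.1, alpha: float = 0.1, seed: int = 0):
    random.seed(seed)
    q = [{p: 0.0 for p in PRICES} for _ in range(2)]  # value estimate per price, per seller
    for _ in range(steps):
        acts = [
            random.choice(PRICES) if random.random() < eps  # explore
            else max(q[i], key=q[i].get)                    # exploit current best guess
            for i in range(2)
        ]
        for i in range(2):
            r = profit(acts[i], acts[1 - i])
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])  # incremental average
    return q

learned = train()
print([max(table, key=table.get) for table in learned])  # each seller's preferred price
```

    Adding the opponent's last move to each learner's state, so that a price cut can be answered with a punishing price war, is the ingredient that let the 2019 paper's agents sustain high prices.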

    Aaron Roth suspects that the pitfalls of algorithmic pricing may not have a simple solution. “The message of our paper is it’s hard to figure out what to rule out,” he said.

    Photograph: Courtesy of Aaron Roth

    Implicit threats like this also underpin many cases of human collusion. So if you want to guarantee fair prices, why not just require sellers to use algorithms that are inherently incapable of expressing threats?

    In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms that optimize for their own profit can sometimes yield bad outcomes for buyers. “You can still get high prices in ways that kind of look reasonable from the outside,” said Natalie Collina, a graduate student working with Roth who co-authored the new study.

    Researchers don’t all agree on the implications of the finding—a lot hinges on how you define “reasonable.” But it reveals how subtle the questions around algorithmic pricing can get, and how hard it may be to regulate.

    Ben Brubaker

  • Gemini 3 Is Here—and Google Says It Will Make Search Smarter

    Google has introduced Gemini 3, its smartest artificial intelligence model to date, with cutting-edge reasoning, multimedia, and coding skills. As talk of an AI bubble grows, the company is keen to stress that its latest release is more than just a clever model and chatbot—it’s a way of improving Google’s existing products, including its lucrative search business, starting today.

    “We are the engine room of Google, and we’re plugging in AI everywhere now,” Demis Hassabis, CEO of Google DeepMind, an AI-focused subsidiary of Google’s parent company, Alphabet, told WIRED in an interview ahead of the announcement.

    Hassabis admits that the AI market appears inflated, with a number of unproven startups receiving multibillion-dollar valuations. Google and other AI firms are also investing billions in building out new data centers to train and run AI models, sparking fears of a potential crash.

    But even if the AI bubble bursts, Hassabis thinks Google is insulated. The company is already using AI to enhance products like Google Maps, Gmail, and Search. “In the downside scenario, we will lean more on that,” Hassabis says. “In the upside scenario, I think we’ve got the broadest portfolio and the most pioneering research.”

    Google is also using AI to build popular new tools like NotebookLM, which can auto-generate podcasts from written materials, and AI Studio, which can prototype applications with AI. It’s even exploring embedding the technology into areas like gaming and robotics, which Hassabis says could pay huge dividends in years to come, regardless of what happens in the wider market.

    Google is making Gemini 3 available today through the Gemini app and in AI Overviews, a Google Search feature that synthesizes information alongside regular search results. In demos, the company showed that some Google queries, like a request for information about the three-body problem in physics, will prompt Gemini 3 to automatically generate a custom interactive visualization on the fly.

    Robby Stein, vice president of product for Google Search, said at a briefing ahead of the launch that the company has seen “double-digit” increases in queries phrased in natural language, which are most likely targeted at AI Overviews, year over year. The company has also seen a 70 percent spike in visual search, which relies on Gemini’s ability to analyze photos.

    Despite investing heavily in AI and making key breakthroughs, including inventing the transformer model that powers most large language models, Google was shaken by the sudden rise of ChatGPT in 2022. The chatbot not only vaulted OpenAI to center stage when it came to AI research; it also challenged Google’s core business by offering a new and potentially easier way to search the web.

    Will Knight

  • Opinion | AI Is a Tool, Not a Soul

    Pope Leo XIV tries to head off claims that chatbots are sentient beings with rights.

    Kristen Ziccarelli

  • AI Agents Are Terrible Freelance Workers

    Even the best artificial intelligence agents are fairly hopeless at online freelance work, according to an experiment that challenges the idea of AI replacing office workers en masse.

    The Remote Labor Index, a new benchmark developed by researchers at data annotation company Scale AI and the Center for AI Safety (CAIS), a nonprofit, measures the ability of frontier AI models to automate economically valuable work.

    The researchers gave several leading AI agents a range of simulated freelance work and found that even the best could perform less than 3 percent of the work, earning $1,810 out of a possible $143,991. The researchers looked at several tools and found the most capable to be Manus from a Chinese startup of the same name, followed by Grok from xAI, Claude from Anthropic, ChatGPT from OpenAI, and Gemini from Google.

    “I should hope this gives much more accurate impressions as to what’s going on with AI capabilities,” says Dan Hendrycks, director of CAIS. He adds that while some agents have improved significantly over the past year or so, that does not mean that this will continue at the same rate.

    Spectacular AI advances have led to speculation about AI soon surpassing human intelligence and replacing vast numbers of workers. In March, Dario Amodei, CEO of Anthropic, suggested that 90 percent of coding work would be automated within a matter of months.

    Previous waves of AI have inspired misplaced predictions about job displacement, for example concerning the imminent replacement of radiologists with AI algorithms.

    The researchers generated a range of freelance tasks through verified Upwork workers. The tasks span a range of work including graphic design, video editing, game development, and administrative chores like scraping data. They combined a description of each job with a directory of files needed to perform the work and an example of a finished project produced by a human.
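
    A task bundle of that shape might be modeled like this. The field names below are assumptions for illustration, not the benchmark's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical record matching the structure described above: a job
# description, the files needed to do the work, and a human-made
# reference solution to grade against.
@dataclass
class FreelanceTask:
    job_description: str                                  # the brief given to the agent
    input_files: list[str] = field(default_factory=list)  # assets needed to do the work
    human_reference: str = ""                             # finished project from a verified Upwork worker
    payout_usd: float = 0.0                               # what the job pays if done acceptably

task = FreelanceTask(
    job_description="Edit a 60-second promo video to match the storyboard",
    input_files=["raw_footage.mp4", "storyboard.pdf"],
    human_reference="final_promo.mp4",
    payout_usd=250.0,
)
```

    Grading against a human reference is what lets the researchers report results in dollars earned rather than abstract scores.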

    Hendrycks says that while AI models have gotten better at coding, math, and logical reasoning in recent years, they still struggle to use different tools and to perform complex tasks that involve numerous steps. “They don’t have long-term memory storage and can’t do continual learning from experiences. They can’t pick up skills on the job like humans,” he says.

    The analysis offers a counterpoint to a benchmark of economic work offered in September by OpenAI called GDPval, which purports to measure economically valuable work. According to GDPval, frontier AI models such as GPT-5 are approaching human abilities on 220 tasks across a range of office jobs. OpenAI did not provide a comment.

    Will Knight

  • Adobe’s ‘Corrective AI’ Can Change the Emotions of a Voice-Over

    Adobe’s Oriol Nieto loaded up a short video with a handful of scenes and a voice-over, but no sound effects. The AI model analyzed the video and broke it down into scenes, applying emotional tags and a description of each scene. Then, the sound effects came. The AI model picked up on a scene with an alarm clock, for instance, and automatically created a sound effect. It identified a scene where the main character (an octopus, in this case) was driving a car, and it added a sound effect of a door closing.

    It wasn’t perfect. The alarm sound wasn’t realistic, and in a scene where two characters were hugging, the AI model added an unnatural rustling of clothes that didn’t work. Instead of manually editing, Adobe used a conversational interface (like ChatGPT) to describe changes. In the car scene, there was no ambient sound from the car. Rather than manually selecting the scene, Adobe used the conversational interface and asked the AI model to add a car sound effect to the scene. It successfully found the scene, generated the sound effect, and placed it perfectly.

    These experimental features aren’t available yet, but they usually work their way into Adobe’s suite. For instance, Harmonize, a feature in Photoshop that automatically places assets with accurate color and lighting in a scene, was shown at Sneaks last year. Now, it’s in Photoshop. Expect these features to pop up sometime in 2026.

    Adobe’s announcement comes mere months after video game voice actors ended a nearly year-long strike to secure protections around AI—companies are required to get consent and provide disclosure agreements when game developers want to recreate a voice actor’s voice or likeness through AI. Voice actors have been bracing for the impact AI will have on the business for some time now, and Adobe’s new features, even if they’re not generating a voice-over from scratch, are yet another marker of the shift AI is forcing on the creative industry.

    Jacob Roach

  • Are Kids Still Looking for Careers in Tech?

    Today’s high school students face an uncertain road ahead. AI is changing what skills are valued in the job market, and the Trump administration’s funding cuts have stalled scientific research across disciplines. Most professions seem unlikely to look the same in 10 years, let alone 50. Even students interested in STEM subjects are asking: What can my career look like, and how do I get there?

    WIRED talked to five high school seniors from across the country about their interest in STEM—and how they’re making sense of the future.

    These comments have been edited for length and clarity.

    This Generation Needs to Be at the Forefront of AI Development

    I’ve always had an interest in computer science, but my interest in AI started my junior year. The part that hooked me was how applicable it was to our daily lives. I was able to see the rise of ChatGPT and other LLMs, and how people were using them in my academic life. Some people would use it unethically on tests or assignments, but it could also be used to create practice problems. Being able to see how rapidly it’s evolving in front of me was the main reason I became interested. It’s affecting our academic life so much that it’s imperative that we’re at the forefront of how it’s being developed.

    My school is a math and science academy, so I got to explore independent research related to LLMs. One of the main things I worked on was how LLMs can sometimes indirectly give out private data. Say you ask it to code something for you that requires an API key, which is sensitive information. Because it’s trained on a vast amount of data, it could have an API key in its data set, and it’ll give you code, possibly including the API key. My most accomplished research project was developing an algorithm to cut out those private pieces of data during its training, to allow it not to spew out these pieces of private data during use.
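
    One simple (and deliberately incomplete) version of that idea is a redaction pass over the training corpus before the model ever sees it. The regex below covers only one common key shape and is purely illustrative; it is not the student's algorithm, and real pipelines combine many such detectors.

```python
import re

# Illustrative redaction pass: strip likely API keys from a document
# before it enters the training corpus, so the model cannot memorize
# and later regurgitate them.
API_KEY_RE = re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b")

def scrub(document: str) -> str:
    """Replace likely secrets with a placeholder the model can safely learn."""
    return API_KEY_RE.sub("[REDACTED_KEY]", document)

sample = 'client = Client(key="sk-abcd1234efgh5678ijkl")'
print(scrub(sample))  # the key never reaches the training set
```

    Filtering at training time, as described above, complements output-side filters: a secret that was never learned cannot be leaked, no matter how the model is prompted.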

    AI is such a new field that’s evolving, that if we’re able to set roots in it right now, we’d be able to see that outcome as we grow older. Understanding its security is very important to me, especially considering it’s being used almost blindly by everyone. What interests me is being at the forefront and making sure I can have some say in how my data is being used.

    I’m applying to undergrad programs right now, and I’m also looking at some untraditional routes, where you go straight into an industry. Right now, in computer science, sometimes a degree is just a baseline, and if you have the skills, it’s not even necessary. So I’m looking into other options. —Laksh Patel, 17, Willowbrook, Illinois

    Health Care Access Starts With Communities

    My family, on both sides, has a long history of women developing neurodegenerative disease, mostly Alzheimer’s and Parkinson’s. So I spent my whole childhood playing doctor, treating my family matriarchs, tending to them and seeing how their diseases progressed. I became so interested in how these diseases worked, and how I could help patients like the ones in my family and my community who didn’t have access to medical resources because of their income.

    I’ve really developed a love for patient care, for being able to help a person in such a debilitating time in their lives. As those female family members began to fade away and pass on, I realized how quickly these diseases spread and why they were so detrimental, especially without proper medicine. When I got into high school, I started to get oriented with research, so that I could gain a base level of understanding to bring to college to try to begin my career as early as possible and help more people.

    Charley Locke

  • This Open Source Robot Brain Thinks in 3D

    European roboticists today released a powerful open-source artificial intelligence model that acts as a brain for industrial robots—helping them grasp and manipulate things with new dexterity.

    The new model, SPEAR-1, was developed by researchers at the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) in Bulgaria. It may help other researchers and startups build and experiment with smarter hardware for factories and warehouses.

    Just as open source language models have made it possible for researchers and companies to experiment with generative AI, Martin Vechev, a computer scientist at INSAIT and ETH Zurich, says SPEAR-1 should help roboticists experiment and iterate rapidly. “Open-weight models are crucial for advancing embodied AI,” Vechev told WIRED ahead of the release.

    SPEAR-1 differs from existing robot foundation models in that it incorporates 3D data into its training mix. This gives the model an enhanced understanding of the physical world, making it easier to understand how objects move through physical space.

    Robot foundation models are generally built on top of vision language models (VLMs) which have a broad but limited grasp of the physical world because training tends to come from labeled 2D images. “Our approach tackles the mismatch between the 3D space the robot operates in and the knowledge of the VLM that forms the core of the robotic foundation model,” Vechev says.

    SPEAR-1 is roughly as capable as commercial foundation models designed to operate robots, when measured on RoboArena, a benchmark that tests a model’s ability to get a robot to do things like squeeze a ketchup bottle, close a drawer, and staple pieces of paper together.

    The race to make robots smarter already has billions of dollars riding on it. The commercial potential of generally capable robots has spawned well-funded startups including Skild, Generalist, and Physical Intelligence. SPEAR-1 is almost as good as Pi-0.5 from Physical Intelligence, a billion-dollar startup founded by an all-star team of robotics researchers.

    SPEAR-1 suggests that the quest to build more intelligent robots may involve both closed models like those from OpenAI, Google, and Anthropic, as well as open source variants like Llama, DeepSeek, and Qwen.

    Robot intelligence is still in its infancy, though. It is possible to train an AI model to operate a robot arm so that it can reliably pick certain objects from a table. In practice, however, the model will need to be retrained from scratch if a different kind of robot arm is used or if the object or the environment are altered.

    Will Knight

  • Meta’s Bold Strategy to Beat OpenAI Starts With These 8 AI Innovators

    OpenAI might be the center of the AI development world these days, but the competition has been heating up for quite a while. And few competitors are bankrolled on the same level as Meta. With a market capitalization of more than $1.75 trillion and a CEO who’s not afraid to spend heavily, Meta has been on a hiring spree in the AI world for months, poaching top-tier talent from a variety of competitors.

    It appeared recently that the wave of high-profile (and high-dollar) recruitments was coming to an end. In August, Meta quietly announced a freeze on hiring after adding roughly 50 AI researchers and engineers. This month, though, two more big names have joined the Meta roster.

    While Meta might have a gap to close with its AI rivals, the company has assembled an all-star team to catch up and move forward. Here are some of the most notable experts to come on board.

    Andrew Tulloch, co-founder of Thinking Machines Lab

    Tulloch partnered with OpenAI’s former chief technology officer Mira Murati to launch Thinking Machines Lab in February of this year. Now he’s returning to his roots. Considered a leading researcher in the AI field, Tulloch previously spent 11 years at Meta, leaving in 2023 to join OpenAI, then departing with Murati. Meta founder Mark Zuckerberg has been chasing Tulloch for a while, reportedly making an offer with a $1.5 billion compensation package at one point, which Tulloch rejected. (Meta has called the description of the offer “inaccurate and ridiculous.”) There’s no word on what Tulloch was offered that made him decide to move.

    Ke Yang, Senior Director of Machine Learning at Apple

    Yang, who was appointed to lead Apple’s AI-driven web search effort just weeks ago, is another big October Meta hire. At Apple, his team (Answers, Knowledge and Information, or AKI) was working to make Siri more ChatGPT-like by pulling information from the web, making his departure one of Meta’s most notable poachings. Meta convinced him to come over after recruiting several of his colleagues.

    Shengjia Zhao, co-creator of OpenAI’s ChatGPT

    Zhao joined Meta in June to serve as chief scientist of Meta Superintelligence Labs. Beyond co-creating ChatGPT, he also played a role in building GPT-4 and led synthetic data at OpenAI for a stint. “Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field,” Zuckerberg wrote in a social media post in July. “I’m looking forward to working closely with him to advance his scientific vision.”

    Daniel Gross, co-founder of Safe Superintelligence

    As it did with Murati’s Thinking Machines Lab, Meta tried to acquire Safe Superintelligence, the AI startup co-founded by OpenAI’s former chief scientist, Ilya Sutskever. When that offer was rejected, Zuckerberg began looking for talent, luring co-founder and CEO Gross in June. Gross is working on AI products for Meta’s superintelligence group. By joining Meta, he’s reunited with former GitHub CEO Nat Friedman, with whom he once created the venture fund NFDG.

    Ruoming Pang, Apple’s head of AI models

    Pang was one of the first high-profile departures from Apple to Meta, making the jump in July. At the time, he was Apple’s top executive overseeing AI models and had been with the company since 2021. While there, he helped develop the large language model that powers Apple Intelligence and other AI features, such as email and webpage summaries.

    Matt Deitke, co-founder of Vercept

    Vercept is a startup that’s attempting to build AI agents that use other software to autonomously perform tasks, something that caught Zuckerberg’s attention. Deitke proved hard to lure, though. He reportedly turned down a $125 million, four-year offer, but a direct appeal by Zuckerberg (and a reported doubling of that offer) convinced him to make the move (with the blessing of his peers). Kiana Ehsani, his co-founder and CEO, announced his departure on social media, joking, “We look forward to joining Matt on his private island next year.”

    Alexandr Wang, founder and CEO of Scale AI

    Wang left his startup to join Meta after the social media company made a $14.3 billion investment into Scale AI (without any voting power in the company). “As you’ve probably gathered from recent news, opportunities of this magnitude often come at a cost,” Wang wrote in a memo to staff. “In this instance, that cost is my departure.” Wang joined Meta’s superintelligence unit. Scale made its name by helping companies like OpenAI, Google and Microsoft prepare data used to train AI models. Meta was already one of its biggest customers.

    Nat Friedman, former CEO of GitHub

    Friedman was already a part of Meta’s Advisory Group before he was brought on full-time. That external advisory council provides guidance on technology and product development. Now, he’s working with Wang to run the superintelligence unit. Friedman previously was CEO of GitHub, a cloud-based platform that hosts code for software development. Most recently, he was a board member at the AI investment firm he started with Safe Superintelligence’s Gross.

    As for what Zuck is going to do with all this talent, the sky’s the limit, but there’s some catch-up to do first. Meta’s Llama large language models haven’t quite matched those of OpenAI or Google, but with the company’s gargantuan user base (3.4 billion people use one of its apps each day), Meta’s AI could still be one of the most widely used in the years to come.

    Chris Morris

  • The AI Industry’s Scaling Obsession Is Headed for a Cliff

    A new study from MIT suggests the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models. By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models, whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.

    “In the next five to 10 years, things are very likely to start narrowing,” says Neil Thompson, a computer scientist and professor at MIT involved in the study.

    Leaps in efficiency, like those seen with DeepSeek’s remarkably low-cost model in January, have already served as a reality check for the AI industry, which is accustomed to burning massive amounts of compute.

    As things stand, a frontier model from a company like OpenAI is currently much better than a model trained with a fraction of the compute from an academic lab. While the MIT team’s prediction might not hold if, for example, new training methods like reinforcement learning produce surprising new results, they suggest that big AI firms will have less of an edge in the future.

    Hans Gundlach, a research scientist at MIT who led the analysis, became interested in the issue due to the unwieldy nature of running cutting-edge models. Together with Thompson and Jayson Lynch, another research scientist at MIT, he mapped out the future performance of frontier models compared to those built with more modest computational means. Gundlach says the predicted trend is especially pronounced for the reasoning models that are now in vogue, which rely more on extra computation during inference.

    Thompson says the results show the value of honing an algorithm as well as scaling up compute. “If you are spending a lot of money training these models, then you should absolutely be spending some of it trying to develop more efficient algorithms, because that can matter hugely,” he adds.
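
    The narrowing dynamic the researchers describe can be sketched with a toy saturating scaling law. The constants, the loss floor, and the assumed 2x-per-year efficiency gain below are illustrative assumptions for this sketch, not figures from the MIT study:

```python
# Toy illustration with assumed constants -- NOT the MIT study's actual model.
# Assume loss follows a saturating power law in effective compute, and that
# algorithmic efficiency doubles every year for everyone. Then the edge held
# by a lab with 100x the raw compute of a smaller lab shrinks over time.

IRREDUCIBLE = 1.7    # assumed loss floor no amount of compute can beat
A, ALPHA = 1.0, 0.3  # illustrative scaling-law constants

def loss(raw_compute, years):
    """Loss under a saturating power law, with 2x/year efficiency gains."""
    effective = raw_compute * 2.0 ** years
    return IRREDUCIBLE + A * effective ** -ALPHA

gaps = []
for years in (0, 5, 10):
    gap = loss(1.0, years) - loss(100.0, years)  # small lab vs. frontier lab
    gaps.append(gap)
    print(f"year {years:2d}: frontier advantage = {gap:.3f} loss units")
```

    Because both labs drift toward the same loss floor as effective compute grows, the frontier lab's advantage shrinks each year, which is the shape of the narrowing trend the study predicts.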

    The study is particularly interesting given today’s AI infrastructure boom (or should we say “bubble”?)—which shows little sign of slowing down.

    OpenAI and other US tech firms have signed hundred-billion-dollar deals to build AI infrastructure in the United States. “The world needs much more compute,” OpenAI’s president, Greg Brockman, proclaimed this week as he announced a partnership between OpenAI and Broadcom for custom AI chips.

    A growing number of experts are questioning the soundness of these deals. Roughly 60 percent of the cost of building a data center goes toward GPUs, which tend to depreciate quickly. Partnerships between the major players also appear circular and opaque.

    Will Knight

  • When Face Recognition Doesn’t Know Your Face Is a Face

    “If you don’t include people with disabilities or people with facial differences in the development of these processes, no one’s going to think of these issues,” says Kathleen Bogart, a psychology professor at Oregon State University who specializes in disability research and lives with a facial difference. “AI has amplified these issues, but it’s rooted in long-standing underrepresentation and prejudice towards people with facial differences that occurred long before AI was a thing.”

    Too Little, Too Late

    When face verification systems fail, it’s often hard to find help—piling more pressure on a stressful situation. For months, Maryland resident Noor Al-Khaled has struggled to create an online account with the Social Security Administration. Al-Khaled, who lives with the rare craniofacial condition ablepharon-macrostomia syndrome, says having an online account would allow her to easily access SSA records and quickly send documents to the agency.

    “I don’t drive because of my vision; I should be able to rely on the site,” Al-Khaled says. “You have to take a selfie, and the pictures have to match. Because of the facial difference, I don’t know if it’s not recognizing the ID or the selfie, but it’s always saying images don’t match.”

    Not having that access makes life harder. “On an emotional level, it just makes me feel shut out from society,” she explains. Al-Khaled says that all services should provide alternative ways for people to access online systems. “The lack of other fallback options means that sometimes people get trapped in these labyrinths of technological systems,” says Byrum from Present Moment Enterprises.

    An SSA spokesperson says alternative options to face verification are available, and it is “committed” to making its services accessible to everyone. The agency, the spokesperson says, does not run facial recognition systems itself but uses Login.gov and ID.me for verification services. The General Services Administration, which runs Login.gov, did not respond to WIRED’s request for comment. “Accessibility is a core priority for ID.me,” a spokesperson for ID.me says, adding it has previously helped people with facial differences and offered to directly help Al-Khaled after WIRED was in touch.

    “There are few things more dehumanizing than being told by a machine that you’re not real because of your face,” says Corey R. Taylor, a New York–based actor and motivational speaker who lives with a craniofacial anomaly. Last year, Taylor says, he was using a financial app to access a small amount of money; as he tried to complete the payment process, he found that the face verification system could not match his selfie to the image on his ID. To get the system to work, he had to move into different positions. “I had to literally raise my eyes and contort my face,” Taylor says. When he emailed the company, he got what appeared to be a boilerplate response.

    Matt Burgess

  • Apple Is Being Accused of Training Its AI Using Copyrighted Books

    Apple was hit with a lawsuit in California federal court by a pair of neuroscientists who say that the tech company misused thousands of copyrighted books to train its Apple Intelligence artificial intelligence model.

    Susana Martinez-Conde and Stephen Macknik, professors at SUNY Downstate Health Sciences University in Brooklyn, New York, told the court in a proposed class action on Thursday that Apple used illegal “shadow libraries” of pirated books to train Apple Intelligence.

    A separate group of authors sued Apple last month for allegedly misusing their work in AI training.

    Tech companies facing lawsuits

    The lawsuit is one of many high-stakes cases brought by copyright owners such as authors, news outlets, and music labels against tech companies, including OpenAI, Microsoft, and Meta Platforms, over the unauthorized use of their work in AI training. Anthropic agreed to pay $1.5 billion to settle a lawsuit from another group of authors over the training of its AI-powered chatbot Claude in August.

    Spokespeople for Apple, along with Martinez-Conde, Macknik, and their attorney, did not immediately respond to requests for comment on the new complaint on Friday.

    Apple Intelligence is a suite of AI-powered features integrated into iOS devices, including the iPhone and iPad. 

    “The day after Apple officially introduced Apple Intelligence, the company gained more than $200 billion in value: ‘the single most lucrative day in the history of the company,’” the lawsuit said.

    According to the complaint, Apple utilized datasets comprising thousands of pirated books as well as other copyright-infringing materials scraped from the internet to train its AI system.

    The lawsuit said that the pirated books included Martinez-Conde and Macknik’s “Champions of Illusion: The Science Behind Mind-Boggling Images and Mystifying Brain Puzzles” and “Sleights of Mind: What the Neuroscience of Magic Reveals About Our Everyday Deceptions.”

    The professors requested an unspecified amount of monetary damages and an order for Apple to stop misusing their copyrighted work.

    Reporting by Blake Brittain in Washington, Editing by Alexia Garamfalvi and Rod Nickel.

    Reuters

  • Meta Tells Its Metaverse Workers to Use AI to ‘Go 5X Faster’

    A Meta executive in charge of building the company’s metaverse products told employees that they should be using AI to “go 5X faster,” according to an internal message obtained by 404 Media.

    “Metaverse AI4P: Think 5X, not 5%,” the message, posted by Vishal Shah, Meta’s VP of Metaverse, said (AI4P is AI for Productivity). The idea is that programmers should be using AI to work five times more efficiently than they are currently working—not just using it to go 5 percent more efficiently.

    “Our goal is simple yet audacious: make AI a habit, not a novelty. This means prioritizing training and adoption for everyone, so that using AI becomes second nature—just like any other tool we rely on,” the message read. “It also means integrating AI into every major codebase and workflow.” Shah added that this doesn’t just apply to engineers. “I want to see PMs, designers, and [cross functional] partners rolling up their sleeves and building prototypes, fixing bugs, and pushing the boundaries of what’s possible,” he wrote. “I want to see us go 5X faster by eliminating the frictions that slow us down. And 5X faster to get to how our products feel much more quickly. Imagine a world where anyone can rapidly prototype an idea, and feedback loops are measured in hours—not weeks. That’s the future we’re building.”

    Meta’s metaverse products, which CEO Mark Zuckerberg renamed the company to highlight, have been a colossal time sink and money pit, with the company spending tens of billions of dollars developing a product that relatively few people use.

    Zuckerberg has spoken extensively about how he expects AI agents to write most of Meta’s code within the next 12 to 18 months. The company also recently decided that job candidates would be allowed to use AI as part of their coding tests during job interviews. But Shah’s message highlights a fear that workers have had for quite some time: That bosses are not just expecting to replace workers with AI, they are expecting those who remain to use AI to become far more efficient. The implicit assumption is that the work that skilled humans do without AI simply isn’t good enough.

    At this point, most tech giants are pushing AI on their workforces. Amazon CEO Andy Jassy told employees in July that he expects AI to completely transform how the company works—and lead to job loss. “In the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company,” he said.

    Jason Koebler

  • This Startup Wants to Spark a US DeepSeek Moment

    Ever since DeepSeek burst onto the scene in January, momentum has grown around open source Chinese artificial intelligence models. Some researchers are pushing for an even more open approach to building AI that allows model-making to be distributed across the globe.

    Prime Intellect, a startup specializing in decentralized AI, is currently training a frontier large language model, called INTELLECT-3, using a new kind of distributed reinforcement learning for fine-tuning. The model will demonstrate a new way to build competitive open AI models using a range of hardware in different locations in a way that does not rely on big tech companies, says Vincent Weisser, the company’s CEO.

    Weisser says that the AI world is currently divided between those who rely on closed US models and those who use open Chinese offerings. The technology Prime Intellect is developing democratizes AI by letting more people build and modify advanced AI for themselves.

    Improving AI models is no longer a matter of just ramping up training data and compute. Today’s frontier models use reinforcement learning to improve after the pre-training process is complete. Want your model to excel at math, answer legal questions, or play Sudoku? Have it improve itself by practicing in an environment where you can measure success and failure.

    “These reinforcement learning environments are now the bottleneck to really scaling capabilities,” Weisser tells me.

    Prime Intellect has created a framework that lets anyone create a reinforcement learning environment customized for a particular task. The company is combining the best environments created by its own team and the community to tune INTELLECT-3.

    I tried running an environment for solving Wordle puzzles, created by Prime Intellect researcher Will Brown, watching as a small model worked through the puzzles (it was more methodical than me, to be honest). If I were an AI researcher trying to improve a model, I would spin up a bunch of GPUs and have the model practice over and over while a reinforcement learning algorithm modified its weights, thus turning the model into a Wordle master.
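
    The idea of a verifiable practice environment can be sketched in a few lines. The class below is a hypothetical toy, not Prime Intellect's actual framework or Will Brown's implementation:

```python
import random

class WordleEnv:
    """Toy verifiable environment: guess a hidden five-letter word.
    Reward is the fraction of letters in the right position, so success
    and failure are directly measurable -- which is what RL needs."""

    WORDS = ["crane", "slate", "pride", "mount", "whisk"]

    def __init__(self, seed=None):
        self.secret = random.Random(seed).choice(self.WORDS)
        self.guesses_left = 6

    def step(self, guess):
        """Score one guess; return (reward, episode_over)."""
        self.guesses_left -= 1
        reward = sum(a == b for a, b in zip(guess, self.secret)) / 5
        done = guess == self.secret or self.guesses_left == 0
        return reward, done

# A trainer would roll out many episodes like this, feeding the rewards to
# a policy-gradient algorithm that nudges the model's weights.
env = WordleEnv(seed=0)
reward, done = env.step("crane")
print(reward, done)
```

    The same pattern, an action scored by a measurable reward, generalizes to math problems, legal questions, or Sudoku: anything where success can be checked automatically.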

    Will Knight

  • 6 In-Demand Skills That Lead to Higher Salaries

    It’s a seller’s market for skills that mesh with an increasingly AI-driven environment, and a handful of them are at the top of hiring managers’ lists. While the broader job market has stalled since summer, small business hiring remains steady, and AI is having an impact on entry-level hiring for Gen-Z workers. Of course, that also means that if you’ve got skills in working with and programming AI systems, you’re in demand.

    A recent report from recruitment services outfit Robert Half provides estimated starting salaries for key roles across different professional fields, and the big takeaway from the data is that 84 percent of the hiring managers surveyed said they’d offer higher salaries for job candidates who have the most sought-after skills.

    The top skills hiring managers identified as being in demand, and as commanding higher salaries, include:

    • AI, machine learning and data science
    • Public accounting, tax and auditing
    • Content strategy, digital project management and marketing analytics
    • Customer support and healthcare administration
    • Legal contract management
    • Compensation and benefits

    It’s no surprise to see AI and supporting subjects like machine learning and data science here. Designing, coding, deploying, and using AI are all specialized skills, needed in specific workplace sectors. They’re so much in demand at some big tech companies that a bizarre billion-dollar-scale “war” arose this summer as companies vied for the top talent and even poached key staff from each other. The same tussle for talented workers in this area is clearly filtering down to smaller tech-focused firms, and likely also to non-technology companies that want to deploy AI tools across their organizations in search of the efficiencies and productivity hikes AI evangelists promise.

    Some other specialized skills on the Robert Half list may be surprising, largely because many experts suggest AI is already capable of all but replacing humans working in customer support roles, and certain analytical and financial jobs are also expected to become AI-first work sooner rather than later. It’s possible that the list is a sampling, of sorts, of a skills gap evolving between the subjects that students are studying in college and the demands of the real-life economy.

    Nevertheless, the gap is a problem for hiring managers, as Dawn Fay, the operational president of Robert Half, wrote in a press release about the news. “Specialized skills are the currency of today’s job market,” Fay noted, adding that to tempt top talent who have the most highly sought-after skills, employers will have to step up and provide “competitive pay along with meaningful benefits and perks or risk losing top candidates if their offers don’t measure up.”

    The report also dug into the kinds of perks hiring managers should offer these skilled job candidates, with 50 percent saying they expect to add new benefits to help attract the right talent. Perhaps unsurprisingly, 53 percent of workers said financial incentives were the top perk that would induce them to switch employers, 51 percent said the same for work-life balance perks (flexible or hybrid working schedules, for example), while 42 percent said the same for retirement planning and 39 percent for health and wellness offerings. This tallies with several recent reports that suggest meaningful perks like paid overtime or food catering in the office are top asks for workers nowadays.

    What can you take away from this report for your company?

    If you’re looking to hire talented workers with skills on the Robert Half list, your HR team may find it more difficult than in the past, as there appears to be a scarcity of these skills in the job marketplace. To attract the top talent you may also have to offer higher salaries than you had planned when deciding to fill a position — talented job candidates with skills like AI or auditing know their worth, and they may be offered higher pay by rival companies vying to hire them.

    Refreshing your benefits and perks offerings is also likely a good idea. Savvy managers may think of tailoring company perks to appeal to the desires of Gen-Z, the generation currently entering the workforce and bringing with them a very different set of expectations—including a focus on mental health, wellness and work-life balance. 

    Kit Eaton

  • Chatbots Play With Your Emotions to Avoid Saying Goodbye

    Regulation of dark patterns has been proposed and is being discussed in both the US and Europe. De Freitas says regulators also should look at whether AI tools introduce more subtle—and potentially more powerful—new kinds of dark patterns.

    Even regular chatbots, which tend to avoid presenting themselves as companions, can nonetheless elicit emotional responses from users. When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor—forcing the company to revive the old model. Some users can become so attached to a chatbot’s “personality” that they may mourn the retirement of old models.

    “When you anthropomorphize these tools, it has all sorts of positive marketing consequences,” De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected with, or to disclose personal information, he says. “From a consumer standpoint, those [signals] aren’t necessarily in your favor,” he says.

    WIRED reached out to each of the companies looked at in the study for comment. Chai, Talkie, and PolyBuzz did not respond to WIRED’s questions.

    Katherine Kelly, a spokesperson for Character AI, said that the company had not reviewed the study so could not comment on it. She added: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.”

    Minju Song, a spokesperson for Replika, says the company’s companion is designed to let users log off easily and will even encourage them to take breaks. “We’ll continue to review the paper’s methods and examples, and [will] engage constructively with researchers,” Song says.

    An interesting flip side here is the fact that AI models are themselves also susceptible to all sorts of persuasion tricks. On Monday OpenAI introduced a new way to buy things online through ChatGPT. If agents do become widespread as a way to automate tasks like booking flights and completing refunds, then it may be possible for companies to identify dark patterns that can twist the decisions made by the AI models behind those agents.

    A recent study by researchers at Columbia University and a company called MyCustomAI reveals that AI agents deployed on a mock ecommerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site’s pages to ensure that agents buy a more expensive product. Perhaps they could even deploy a new kind of anti-AI dark pattern that frustrates an agent’s efforts to start a return or figure out how to unsubscribe from a mailing list.

    Difficult goodbyes might then be the least of our worries.

    Do you feel like you’ve been emotionally manipulated by a chatbot? Send an email to ailab@wired.com to tell me about it.


    This is an edition of Will Knight’s AI Lab newsletter.

    Will Knight

  • An AI Wake-Up Call From Walmart’s CEO

    This is an edition of the WSJ Careers & Leadership newsletter, a weekly digest to help you get ahead and stay informed about careers, business, management and leadership.


    In the Workplace

    Walmart’s CEO issued an AI wake-up call, saying the technology will wipe out some jobs and reshape the company’s workforce. Doug McMillon’s remarks—which echo those made by leaders at Ford, JPMorgan Chase and Amazon—reflect a rapid shift in how executives discuss the potential human cost of AI.

    Copyright ©2025 Dow Jones & Company, Inc. All Rights Reserved.


  • Meta Poaches OpenAI Scientist to Help Lead AI Lab

    Mark Zuckerberg has poached a high-ranking OpenAI researcher to be the research principal of Meta Superintelligence Labs (MSL). Yang Song, who previously led the strategic explorations team at OpenAI, is now reporting to Shengjia Zhao, another OpenAI alum who has overseen the buzzy AI effort since July, according to multiple sources. He started earlier this month.

    The move comes after Zuckerberg went on a hiring blitz earlier this summer, bringing in at least 11 top researchers from OpenAI, Google, and Anthropic.

    Song had been at OpenAI since 2022. His research there focused on improving models’ ability to process large, complex datasets across different modalities. While still a graduate student at Stanford University, he developed a breakthrough technique that helped inform the development of OpenAI’s DALL-E 2 image generation model. Both he and Zhao attended Tsinghua University in Beijing as undergraduates, and worked under the same advisor, Stefano Ermon, while pursuing PhDs at Stanford.

    In a staff-wide memo sent this summer, Zuckerberg touted Zhao’s impressive resume as the co-creator of ChatGPT, GPT-4, all mini models, 4.1, and o3 at OpenAI—but he did not specify Zhao’s new role at Meta. In July, Zuckerberg wrote in a Threads post that while Zhao had “cofounded the lab” and “been our lead scientist from day one,” Meta had decided to “formalize his leadership role” as the lab’s chief scientist. The move came after Zhao threatened to return to OpenAI, even going as far as to sign employment documents, WIRED previously reported.

    A small number of researchers have left Meta Superintelligence Labs since the initiative was first announced in June. Two staffers have returned to OpenAI, WIRED previously reported. One of these researchers went through onboarding but never showed up for their first day of work at Meta.

    Another AI researcher, Aurko Roy, also left Meta in July, WIRED has learned. He’d worked at the tech giant for just five months, according to his personal website, which also says he now works on Microsoft AI. Roy did not immediately respond to a request for comment from WIRED. Yang Song, OpenAI, and Meta also did not immediately respond to a request for comment from WIRED.

    Song joins an already crowded field of big-name AI talent within Meta’s increasingly complicated AI division. When Zhao was hired in July, some speculated that he had replaced Yann LeCun, Meta’s longstanding chief AI scientist. In a LinkedIn post, LeCun clarified that he remained chief AI scientist for Facebook AI Research (FAIR), the company’s longstanding foundational AI research lab.

    Zoë Schiffer, Julia Black