ReportWire

Tag: elevenlabs

  • ElevenLabs raises $500M from Sequoia at an $11 billion valuation | TechCrunch

    Voice AI company ElevenLabs said today it raised $500 million in a new funding round led by Sequoia Capital, which had previously invested in the startup through a secondary tender offer. Sequoia partner Andrew Reed is joining the company’s board.

    The startup is now valued at $11 billion, more than three times its valuation in its last round in January 2025. Earlier in the year, the Financial Times reported that the startup was looking to raise at that valuation.

    The company said that existing investor a16z quadrupled its investment, and ICONIQ, which led the last round, tripled its own. Prior investors including BroadLight, NFDG, Valor Capital, AMP Coalition, and Smash Capital also joined the round, alongside new investors Lightspeed Venture Partners, Evantic Capital, and BOND.

    ElevenLabs said it will disclose additional investors, some of which may be strategic partners, later in February. The company has raised over $781 million to date and said it will use the funding for research and product development, along with expansion into international markets such as India, Japan, Singapore, Brazil, and Mexico.

    The company’s co-founder, Mati Staniszewski, indicated that ElevenLabs might work on agents beyond voice and incorporate video. In January, the company announced a partnership with LTX to produce audio-to-video content.

    “The intersection of models and products is critical – and our team has proven, time and again, how to translate research into real-world experiences. This funding helps us go beyond voice alone to transform how we interact with technology altogether. We plan to expand our Creative offering – helping creators combine our best-in-class audio with video and Agents – enabling businesses to build agents that can talk, type, and take action,” he said in a statement.

    The company has strong growth momentum, closing the year at $330 million in ARR. In an interview with Bloomberg earlier this year, Staniszewski said it took ElevenLabs five months to grow from $200 million to $300 million in ARR.

    Voice AI model providers are an attractive target for investors and big tech companies. In January, rival Deepgram raised $130 million from AVP at a $1.3 billion valuation. Meanwhile, Google hired top talent from voice model company Hume AI, including CEO Alan Cowen.

    Ivan Mehta

  • How Twelve Labs Teaches A.I. to ‘See’ and Transform Video Understanding: Interview

    Soyoung Lee, co-founder and head of GTM at Twelve Labs, pictured at Web Summit Vancouver 2025. Photo by Vaughn Ridley/Web Summit via Sportsfile via Getty Images

    Sure, the score of a football game is important. But sporting events can also foster cultural moments that slip under the radar—such as Travis Kelce flashing a heart sign to Taylor Swift in the stands. While such footage could be social-media gold, it’s easily missed by traditional content-tagging systems. That’s where Twelve Labs comes in.

    “Every sports team or sports league has decades of footage that they’ve captured in-game, around the stadium, about players,” Soyoung Lee, co-founder and head of GTM at Twelve Labs, told Observer. However, these archives are often underutilized due to inconsistent and outdated content management. “To date, most of the processes for tagging content have been manual.”

    Twelve Labs, a San Francisco-based startup specializing in video-understanding A.I., wants to unlock the value of video content by offering models that can search vast archives, generate text summaries and create short-form clips from long-form footage. Its work extends far beyond sports, touching industries from entertainment and advertising to security.

    “Large language models can read and write really well,” said Lee. “But we want to move on to create a world in which A.I. can also see.”

    Is Twelve Labs related to Eleven Labs?

    Founded in 2021, Twelve Labs isn’t to be confused with ElevenLabs, an A.I. startup that specializes in audio. “We started a year earlier,” Lee joked, adding that Twelve Labs—which named itself after the initial size of its founding team—often partners with ElevenLabs for hackathons, including one dubbed “23Labs.”

    The startup’s ambitious vision has drawn interest from deep-pocketed backers. It has raised more than $100 million from investors such as Nvidia, Intel, and Firstman Studio, the studio of Squid Game creator Hwang Dong-hyuk. Its advisory bench is equally star-studded, featuring Fei-Fei Li, Jeffrey Katzenberg and Alexandr Wang.

    Twelve Labs counts thousands of developers and hundreds of enterprise customers among its users. Demand is highest in entertainment and media, spanning Hollywood studios, sports leagues, social media influencers and advertising firms that rely on Twelve Labs tools to automate clip generation, assist with scene selection or enable contextual ad placements.

    Government agencies also use the startup’s technology for video search and event retrieval. Beyond its work with the U.S. and other nations, Lee said that Twelve Labs has a deployment in South Korea’s Sejong City to help CCTV operators monitor thousands of camera feeds and locate specific incidents. To reduce security risks, the company has removed capabilities for facial and biometric recognition, she added.

    Will video-native A.I. come for human jobs?

    Many of the industries Twelve Labs serves are already debating whether A.I. threatens human jobs—a concern Lee argues is only partly warranted. “I don’t know if jobs will be lost, per se, but jobs will have to transition,” she said, comparing the shift to how tools like Photoshop reshaped creative roles.

    If anything, Lee believes systems like Twelve Labs’ will democratize creative work traditionally limited to companies with big budgets. “You are now able to do things with less, which means you have more stories that can be created from independent creatives who do not have that same capital,” she said. “It actually allows for the scaling of content creation and personalizing distribution.”

    Twelve Labs is not the only A.I. player eyeing video, but the company insists it serves a different need than its much larger competitors. “We’re excited that video is now starting to get more attention, but the way we’re seeing it is a lot of innovation in large language models, a lot of innovation in video generation models and image generation models like Sora—but not in video understanding,” said Lee, referencing OpenAI’s text-to-video A.I. model and app.

    For now, Twelve Labs offers video search, video analysis and video-to-text capabilities. The company plans to expand into agentic platforms that can not only understand video but also build narratives from it. Such models could be useful beyond creative fields, Lee said, pointing to examples like retailers identifying peak foot-traffic hours or security clients mapping the sequence of events surrounding an accident.

    While A.I. might help a Hollywood director assemble a movie, Lee believes it won’t ever be the director. Even if the technology can provide narrative options, humans still decide which story is most compelling, identify gaps and supply the footage. “At the end of the day, I think there’s nothing that can replace human creative intent.”

    Alexandra Tremayne-Pengelly

  • Virginia Rep. Jennifer Wexton has her voice back, and she’s ready to talk – WTOP News

    Virginia Rep. Jennifer Wexton now has a new voice that just so happens to be her old voice, thanks to software company ElevenLabs.

    Virginia Rep. Jennifer Wexton touched a lot of hearts this week by sharing a video on X that let the world know that she has found her voice.

    Virginia Rep. Jennifer Wexton uses an AI replica of her voice to allow her to “speak” at meetings and hearings. (Credit: Jennifer Wexton/X)

    Since 2019, Wexton has represented Virginia’s 10th Congressional District in the U.S. House of Representatives.

    Last September, Wexton announced that she had been diagnosed with progressive supranuclear palsy, or PSP, and would not run for reelection. She described PSP as a kind of “Parkinson’s on steroids.”

    Due to the effect PSP had on the volume and clarity of the congresswoman’s speech, she started using a text-to-speech app, including on the House floor.

    Now, Wexton’s new voice just so happens to be her old voice, thanks to software company ElevenLabs.

    The software company created an AI voice model of Wexton’s voice by using a collection of her old speeches that were provided by her staff.

    The congresswoman spoke with WTOP’s Jimmy Alexander, who asked her some questions; she answered them in her own words.

    The transcript below has been lightly edited for clarity.

    WTOP asked Wexton what it was like to hear her AI voice for the first time.

    Virginia Rep. Jennifer Wexton: Hearing my new-old AI voice made me cry happy tears.

    As a former prosecutor who argued cases in court, and now as a politician, using my voice has always been an integral part of what I do and who I am. A politician who can’t do public speaking will become a former politician in short order.

    That’s why developing this AI voice model has meant so much to me. I also feel that is an important way to show that just because my speech may not be what it used to be, doesn’t mean my words are any less mine or any less important for others to hear.

    For those of us who face health or accessibility challenges, our abilities do not define us.

    WTOP asked Wexton what advice she would give to someone newly diagnosed with PSP.

    Wexton: People are going to offer to help you and you should take them up on it. Even if you can do everything now, you likely won’t be able to soon.

    I’m also incredibly fortunate to have an amazing support network of family, friends and staff nearby, without whom I would not be able to do it all. So go ahead and accept that ride or let a friend do the grocery shopping for you. You’ll be glad you did.

    After being diagnosed, I sought out a variety of medical advice on the best ways to manage my illness. Staying engaged with others, working out, pursuing speech, physical and talk therapies, and finding medication that helps alleviate my symptoms are the ways I’ve been able to continue living my life and doing the job I love. Giving in to the disease accelerates its progression; staying active helps me feel better physically and mentally.

    © 2024 WTOP. All Rights Reserved.

    Ciara Wells

  • Their children were shot, so they used AI to recreate their voices and call lawmakers

    The parents of a teenager who was killed in Florida’s Parkland school shooting in 2018 have started a bold new project called The Shotline to lobby for stricter gun laws in the country. The Shotline uses AI to recreate the voices of children killed by gun violence and send recordings through automated calls to lawmakers, The Wall Street Journal reported.

    The project launched on Wednesday, six years after a gunman killed 17 people and injured more than a dozen at a high school in Parkland, Florida. It features the voices of six children and young adults, some as young as 10, who lost their lives to gun violence across the US. Once you type in your zip code, The Shotline finds your local representative and lets you place an automated call from one of the six victims, in their own voice, urging stronger gun control laws. “I’m back today because my parents used AI to recreate my voice to call you,” says the AI-generated voice of Joaquin Oliver, one of the teenagers killed in the Parkland shooting. “Other victims like me will be calling too.” At the time of publishing, more than 8,000 AI calls had been submitted to lawmakers through the website.

    “This is a United States problem and we have not been able to fix it,” Oliver’s father Manuel, who started the project along with his wife Patricia, told the Journal. “If we need to use creepy stuff to fix it, welcome to the creepy.”

    To recreate the voices, the Olivers used a voice cloning service from ElevenLabs, a two-year-old startup that recently raised $80 million in a round of funding led by Andreessen Horowitz. Using just a few minutes of vocal samples, the software can recreate voices in more than two dozen languages. The Olivers reportedly used their son’s social media posts for his voice samples. Parents and legal guardians of gun violence victims can fill out a form to submit their children’s voices to The Shotline, to be added to its repository of AI-generated voices.

    The project raises ethical questions about using AI to generate deepfakes of the voices of dead people. Last week, the Federal Communications Commission declared that robocalls made using AI-generated voices are illegal, a decision that came weeks after voters in New Hampshire received calls impersonating President Joe Biden telling them not to vote in their state’s primary. An analysis by security company Pindrop revealed that the Biden audio deepfake was created using software from ElevenLabs.

    The company’s co-founder Mati Staniszewski told the Journal that ElevenLabs allows people to recreate the voices of dead relatives if they have the rights and permissions. But so far, it’s not clear whether parents of minors had the rights to their children’s likenesses.

    Pranav Dixit
